Jan 23 16:16:37 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 16:16:37 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:37 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 16:16:38 crc restorecon[4690]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 
16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 
crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 
16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:16:38 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:16:38 crc restorecon[4690]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 16:16:38 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 16:16:38 crc kubenswrapper[4718]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.983726 4718 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986875 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986895 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986900 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986910 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986916 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986921 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986927 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986932 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986938 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986945 4718 
feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986952 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986958 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986964 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986969 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986974 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986979 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986982 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986987 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986991 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.986995 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987000 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987003 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987007 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987011 4718 feature_gate.go:330] unrecognized feature 
gate: VSphereMultiNetworks Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987015 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987019 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987022 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987026 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987029 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987033 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987038 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987042 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987046 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987051 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987054 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987058 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987063 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987067 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987070 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987074 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987078 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987082 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987085 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987089 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987093 4718 feature_gate.go:330] unrecognized feature gate: Example Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987098 4718 feature_gate.go:330] 
unrecognized feature gate: VolumeGroupSnapshot Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987101 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987105 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987108 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987112 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987115 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987119 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987123 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987127 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987130 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987134 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987137 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987141 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987145 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987150 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987154 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987157 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987161 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987166 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987170 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987174 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987178 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987181 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987185 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987189 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.987192 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987437 4718 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987447 4718 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 
16:16:38.987455 4718 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987460 4718 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987465 4718 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987471 4718 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987478 4718 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987484 4718 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987488 4718 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987492 4718 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987496 4718 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987501 4718 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987505 4718 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987509 4718 flags.go:64] FLAG: --cgroup-root="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987513 4718 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987517 4718 flags.go:64] FLAG: --client-ca-file="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987521 4718 flags.go:64] FLAG: --cloud-config="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987525 4718 flags.go:64] FLAG: --cloud-provider="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987529 4718 flags.go:64] FLAG: 
--cluster-dns="[]" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987534 4718 flags.go:64] FLAG: --cluster-domain="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987538 4718 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987542 4718 flags.go:64] FLAG: --config-dir="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987546 4718 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987550 4718 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987556 4718 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987560 4718 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987564 4718 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987568 4718 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987572 4718 flags.go:64] FLAG: --contention-profiling="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987576 4718 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987580 4718 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987584 4718 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987588 4718 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987593 4718 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987598 4718 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 16:16:38 crc 
kubenswrapper[4718]: I0123 16:16:38.987602 4718 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987606 4718 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987610 4718 flags.go:64] FLAG: --enable-server="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987616 4718 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987622 4718 flags.go:64] FLAG: --event-burst="100" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987639 4718 flags.go:64] FLAG: --event-qps="50" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987643 4718 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987648 4718 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987652 4718 flags.go:64] FLAG: --eviction-hard="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987659 4718 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987663 4718 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987669 4718 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987675 4718 flags.go:64] FLAG: --eviction-soft="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987680 4718 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987685 4718 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987690 4718 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987694 4718 flags.go:64] FLAG: --experimental-mounter-path="" Jan 
23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987699 4718 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987704 4718 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987709 4718 flags.go:64] FLAG: --feature-gates="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987717 4718 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987721 4718 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987726 4718 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987730 4718 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987734 4718 flags.go:64] FLAG: --healthz-port="10248" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987738 4718 flags.go:64] FLAG: --help="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987742 4718 flags.go:64] FLAG: --hostname-override="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987746 4718 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987751 4718 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987755 4718 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987759 4718 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987763 4718 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987767 4718 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987771 4718 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 16:16:38 crc 
kubenswrapper[4718]: I0123 16:16:38.987776 4718 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987781 4718 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987785 4718 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987790 4718 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987794 4718 flags.go:64] FLAG: --kube-reserved="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987798 4718 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987802 4718 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987806 4718 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987810 4718 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987814 4718 flags.go:64] FLAG: --lock-file="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987818 4718 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987822 4718 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987826 4718 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987832 4718 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987835 4718 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987839 4718 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987844 4718 flags.go:64] FLAG: --logging-format="text" Jan 23 16:16:38 crc 
kubenswrapper[4718]: I0123 16:16:38.987848 4718 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987853 4718 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987857 4718 flags.go:64] FLAG: --manifest-url="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987860 4718 flags.go:64] FLAG: --manifest-url-header="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987866 4718 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987870 4718 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987875 4718 flags.go:64] FLAG: --max-pods="110" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987879 4718 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987883 4718 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987887 4718 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987891 4718 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987895 4718 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987899 4718 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987903 4718 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987913 4718 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987917 4718 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 16:16:38 
crc kubenswrapper[4718]: I0123 16:16:38.987921 4718 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987926 4718 flags.go:64] FLAG: --pod-cidr="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987929 4718 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987936 4718 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987941 4718 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987945 4718 flags.go:64] FLAG: --pods-per-core="0" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987950 4718 flags.go:64] FLAG: --port="10250" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987954 4718 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987958 4718 flags.go:64] FLAG: --provider-id="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987962 4718 flags.go:64] FLAG: --qos-reserved="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987966 4718 flags.go:64] FLAG: --read-only-port="10255" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987970 4718 flags.go:64] FLAG: --register-node="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987974 4718 flags.go:64] FLAG: --register-schedulable="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987978 4718 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987985 4718 flags.go:64] FLAG: --registry-burst="10" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987989 4718 flags.go:64] FLAG: --registry-qps="5" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987993 4718 flags.go:64] FLAG: 
--reserved-cpus="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.987996 4718 flags.go:64] FLAG: --reserved-memory="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988001 4718 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988005 4718 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988010 4718 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988013 4718 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988018 4718 flags.go:64] FLAG: --runonce="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988022 4718 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988026 4718 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988030 4718 flags.go:64] FLAG: --seccomp-default="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988034 4718 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988038 4718 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988042 4718 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988046 4718 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988051 4718 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988055 4718 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988059 4718 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988063 4718 
flags.go:64] FLAG: --storage-driver-user="root" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988067 4718 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988072 4718 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988076 4718 flags.go:64] FLAG: --system-cgroups="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988080 4718 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988086 4718 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988090 4718 flags.go:64] FLAG: --tls-cert-file="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988095 4718 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988100 4718 flags.go:64] FLAG: --tls-min-version="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988104 4718 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988107 4718 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988111 4718 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988116 4718 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988120 4718 flags.go:64] FLAG: --v="2" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988125 4718 flags.go:64] FLAG: --version="false" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988130 4718 flags.go:64] FLAG: --vmodule="" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988134 4718 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 16:16:38 crc kubenswrapper[4718]: I0123 16:16:38.988139 4718 
flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988280 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988285 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988289 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988293 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988297 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988300 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988304 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988307 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988311 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988314 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988318 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988323 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988327 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988335 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 16:16:38 crc kubenswrapper[4718]: W0123 16:16:38.988340 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988345 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988349 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988353 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988356 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988360 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988363 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988367 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988370 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988374 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988377 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988381 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 
16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988385 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988389 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988392 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988395 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988399 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988403 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988406 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988410 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988413 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988417 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988420 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988424 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988427 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988431 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988434 4718 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988438 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988442 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988445 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988448 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988453 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988462 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988465 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988469 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988472 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988475 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988479 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988482 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988485 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988489 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 16:16:39 crc 
kubenswrapper[4718]: W0123 16:16:38.988493 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988498 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988502 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988505 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988509 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988512 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988516 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988521 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988524 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988528 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988532 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988535 4718 feature_gate.go:330] unrecognized feature gate: Example Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988538 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988542 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988545 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.988549 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:38.988554 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:38.999865 4718 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:38.999882 4718 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999937 4718 feature_gate.go:330] unrecognized feature 
gate: MachineAPIProviderOpenStack Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999942 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999947 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999953 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999957 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999961 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999965 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999968 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999972 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999976 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999980 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999984 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999987 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999991 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:38.999995 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 16:16:39 
crc kubenswrapper[4718]: W0123 16:16:38.999999 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000003 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000006 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000010 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000014 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000018 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000021 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000024 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000028 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000032 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000036 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000040 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000043 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000047 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000050 4718 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000054 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000058 4718 feature_gate.go:330] unrecognized feature gate: Example Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000062 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000066 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000069 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000073 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000076 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000080 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000083 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000087 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000091 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000095 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000098 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000102 4718 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000106 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000109 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000113 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000117 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000120 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000124 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000129 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000134 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000139 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000144 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000148 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000153 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000157 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000160 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000164 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000167 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000172 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000176 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000182 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000186 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000190 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000194 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000198 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000201 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000205 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000209 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000212 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.000218 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000351 4718 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000359 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000363 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000367 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000371 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000375 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000379 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000383 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000387 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000391 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000394 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000398 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000402 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000405 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000409 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 
16:16:39.000413 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000416 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000421 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000425 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000429 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000433 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000437 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000440 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000444 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000448 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000452 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000455 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000459 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000463 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000466 4718 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000470 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000473 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000476 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000480 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000484 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000488 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000492 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000498 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000502 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000506 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000511 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000514 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000519 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000523 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000528 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000532 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000536 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000540 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000544 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000549 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000554 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000557 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000562 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000566 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000570 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000575 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000579 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000584 4718 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000589 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000593 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000603 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000608 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000612 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000616 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000620 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000624 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000628 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000648 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000654 4718 feature_gate.go:330] unrecognized feature gate: Example Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000659 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.000663 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.000668 
4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.003416 4718 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.008406 4718 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.008572 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.009603 4718 server.go:997] "Starting client certificate rotation" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.009675 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.010128 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 23:25:45.293232192 +0000 UTC Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.010253 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.016917 4718 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.018689 4718 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.019790 4718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.029200 4718 log.go:25] "Validated CRI v1 runtime API" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.041967 4718 log.go:25] "Validated CRI v1 image API" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.043869 4718 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.047702 4718 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-16-12-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.047735 4718 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.061370 4718 manager.go:217] Machine: {Timestamp:2026-01-23 16:16:39.060273053 +0000 UTC m=+0.207515064 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:765cfa9d-30f3-4d97-8bd5-593f268463db BootID:b4c39b8c-400d-464e-b232-a4d4bf4271ad Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e4:f8:13 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e4:f8:13 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c1:59:bf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:33:2b:25 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b0:e6:1f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e1:f4:e6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:93:f6:c4:97:85 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:32:fb:6f:5c:de Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data 
Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.061600 4718 manager_no_libpfm.go:29] cAdvisor is built without cgo and/or libpfm support. Perf event counters are not available. 
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.061760 4718 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062262 4718 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062432 4718 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062466 4718 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062711 4718 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062725 4718 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062917 4718 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.062958 4718 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.063249 4718 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.063335 4718 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.064012 4718 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.064037 4718 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.064067 4718 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.064089 4718 kubelet.go:324] "Adding apiserver pod source"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.064101 4718 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.066683 4718 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.067155 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.067254 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.067289 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.067386 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.067526 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.068461 4718 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069287 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069333 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069349 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069364 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069389 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069406 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069422 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069446 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069463 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069488 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069508 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069524 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.069842 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.070737 4718 server.go:1280] "Started kubelet"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.070748 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.071321 4718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.071218 4718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.072801 4718 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.072538 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d6862138e90e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:16:39.07066698 +0000 UTC m=+0.217909031,LastTimestamp:2026-01-23 16:16:39.07066698 +0000 UTC m=+0.217909031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.073210 4718 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 16:16:39 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075293 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075342 4718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075384 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:06:03.88166475 +0000 UTC
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075549 4718 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075563 4718 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.075725 4718 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.075933 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms"
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.075583 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.076711 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.076772 4718 factory.go:55] Registering systemd factory
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.076831 4718 factory.go:221] Registration of the systemd container factory successfully
Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.076842 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.082091 4718 factory.go:153] Registering CRI-O factory
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.082123 4718 factory.go:221] Registration of the crio container factory successfully
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.082198 4718 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.082228 4718 factory.go:103] Registering Raw factory
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.082250 4718 manager.go:1196] Started watching for new ooms in manager
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.084353 4718 manager.go:319] Starting recovery of all containers
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.097874 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098006 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098036 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098061 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098086 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098104 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098128 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098150 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098175 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098197 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098218 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098239 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098258 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098293 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098398 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098432 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098490 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098510 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098529 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098550 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098570 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098589 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098611 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098659 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098681 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098701 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098725 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098745 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098768 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098845 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098910 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.098931 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099003 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099028 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099055 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099084 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099112 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099676 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099765 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099780 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099803 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099819 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099836 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099850 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099868 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099881 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099898 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099914 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099928 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099945 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099959 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099971 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.099993 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100012 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100027 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100045 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100061 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100073 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100094 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100106 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100119 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100133 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100150 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100163 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100177 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100190 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100202 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100214 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100227 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100239 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100251 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100266 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100278 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100289 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100301 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100313 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100326 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.100338 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101016 4718 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101040 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101054 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101066 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101079 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101091 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101103 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101114 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101130 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101142 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101155 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101167 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101179 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101190 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101202 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101215 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101238 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101250 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101263 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101275 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" 
seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101287 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101301 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101315 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101326 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101338 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101352 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101364 
4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101382 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101394 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101409 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101421 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101434 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101447 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101460 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101473 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101486 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101499 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101512 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101523 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101536 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101550 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101563 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101574 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101587 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101600 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" 
seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101612 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101624 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101654 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101673 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101703 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101717 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101730 4718 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101743 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101756 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101768 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101780 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101792 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101805 4718 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101817 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101830 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101843 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101855 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101868 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101880 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101893 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101905 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101917 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101930 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101942 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101954 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101966 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101978 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.101990 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102004 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102018 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102030 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102043 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102055 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102068 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102080 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102092 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102104 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102118 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102130 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102142 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102154 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102167 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102181 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102195 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102206 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102220 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102234 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102252 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102268 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" 
seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102283 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102300 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102316 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102332 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102351 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102366 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102380 4718 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102397 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102412 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102428 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102446 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102463 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102478 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102496 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102512 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102528 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102545 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102560 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102574 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102588 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102603 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102618 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102693 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102713 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102728 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102743 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102761 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102776 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102791 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102805 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102819 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102833 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102847 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102864 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102879 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102917 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102932 4718 reconstruct.go:97] "Volume reconstruction finished" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.102942 4718 reconciler.go:26] "Reconciler: start to sync state" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.117784 4718 manager.go:324] Recovery completed Jan 23 16:16:39 crc 
kubenswrapper[4718]: I0123 16:16:39.128162 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.130171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.130240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.130259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.131276 4718 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.131301 4718 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.131339 4718 state_mem.go:36] "Initialized new in-memory state store" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.135652 4718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.138880 4718 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.138946 4718 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.138988 4718 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.139071 4718 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.140093 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.140179 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.142446 4718 policy_none.go:49] "None policy: Start" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.143478 4718 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.143521 4718 state_mem.go:35] "Initializing new in-memory state store" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.176395 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.201527 4718 manager.go:334] "Starting Device Plugin manager" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.201680 4718 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.201706 4718 server.go:79] "Starting device plugin registration server" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.202779 4718 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.202817 4718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.203339 4718 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.203556 4718 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.203572 4718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.215476 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.239670 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.239807 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.241579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.241696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.241722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.242309 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.243062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.243113 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.249250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.249926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.249938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250570 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250682 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.250683 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.251859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.251894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.251907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252045 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252331 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252414 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.252980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253342 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253379 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.253237 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.254903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.255080 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.255168 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.255790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.255813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.255823 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.277040 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.303836 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304348 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304417 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.304759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305141 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305362 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305453 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305537 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305611 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305773 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305705 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.305959 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.306627 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Jan 23 
16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407439 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407776 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407778 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 
16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407984 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.407994 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408075 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408084 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408114 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408130 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408154 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408162 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408195 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408200 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408225 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408244 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408330 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408391 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 
16:16:39.408450 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408508 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.408563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.507832 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.510763 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.510851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.510871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.510920 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.511725 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.588155 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.598127 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.605737 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.621078 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ded2258007003f76394b5b49d7335bc14a7537e5df4807d2be458253524adaff WatchSource:0}: Error finding container ded2258007003f76394b5b49d7335bc14a7537e5df4807d2be458253524adaff: Status 404 returned error can't find the container with id ded2258007003f76394b5b49d7335bc14a7537e5df4807d2be458253524adaff Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.623296 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6b27cfc4d4d4991e63aaaaccb96695b75f305872d09847f55e0421bf675e93bd WatchSource:0}: Error finding container 6b27cfc4d4d4991e63aaaaccb96695b75f305872d09847f55e0421bf675e93bd: Status 404 returned error can't find the container with id 6b27cfc4d4d4991e63aaaaccb96695b75f305872d09847f55e0421bf675e93bd Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.625826 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f2b0a74c5e391850ccf488f80f9a7935e2eb3a3557f4928e66cf19f7ddf5464d WatchSource:0}: Error finding container f2b0a74c5e391850ccf488f80f9a7935e2eb3a3557f4928e66cf19f7ddf5464d: Status 404 returned error can't find the container with id f2b0a74c5e391850ccf488f80f9a7935e2eb3a3557f4928e66cf19f7ddf5464d Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.639147 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.644431 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.678255 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.712331 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c3ca2236228fd77cc3b8c38b5f583d9f64f1d6455289b0e4568a23a7c7e2341b WatchSource:0}: Error finding container c3ca2236228fd77cc3b8c38b5f583d9f64f1d6455289b0e4568a23a7c7e2341b: Status 404 returned error can't find the container with id c3ca2236228fd77cc3b8c38b5f583d9f64f1d6455289b0e4568a23a7c7e2341b Jan 23 16:16:39 crc kubenswrapper[4718]: W0123 16:16:39.716786 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-96117cd78f5688e4db2c4f183a73a8f8e11b94537c0646bbac0f7ff82e138594 WatchSource:0}: 
Error finding container 96117cd78f5688e4db2c4f183a73a8f8e11b94537c0646bbac0f7ff82e138594: Status 404 returned error can't find the container with id 96117cd78f5688e4db2c4f183a73a8f8e11b94537c0646bbac0f7ff82e138594 Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.912864 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.914665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.914723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.914737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:39 crc kubenswrapper[4718]: I0123 16:16:39.914772 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:16:39 crc kubenswrapper[4718]: E0123 16:16:39.915507 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.072384 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.076220 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:06:23.823802071 +0000 UTC Jan 23 16:16:40 crc kubenswrapper[4718]: W0123 16:16:40.080945 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Jan 23 16:16:40 crc kubenswrapper[4718]: E0123 16:16:40.081006 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.145327 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1" exitCode=0 Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.145421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.145599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6b27cfc4d4d4991e63aaaaccb96695b75f305872d09847f55e0421bf675e93bd"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.145783 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147109 4718 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2" exitCode=0 Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147182 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147222 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"96117cd78f5688e4db2c4f183a73a8f8e11b94537c0646bbac0f7ff82e138594"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.147719 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.149782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.149841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c3ca2236228fd77cc3b8c38b5f583d9f64f1d6455289b0e4568a23a7c7e2341b"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.150026 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.150046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.150054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.151825 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d" exitCode=0 Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.152480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.152509 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2b0a74c5e391850ccf488f80f9a7935e2eb3a3557f4928e66cf19f7ddf5464d"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.152610 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.153601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.153670 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.153686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.155203 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d" exitCode=0 Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.155249 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.155270 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ded2258007003f76394b5b49d7335bc14a7537e5df4807d2be458253524adaff"} Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.155374 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.156291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.156340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.156353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.156908 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.158396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.158417 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.158431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: W0123 16:16:40.166472 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Jan 23 16:16:40 crc kubenswrapper[4718]: E0123 16:16:40.166568 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:40 crc kubenswrapper[4718]: W0123 16:16:40.228748 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Jan 23 16:16:40 crc kubenswrapper[4718]: E0123 16:16:40.228874 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:40 crc kubenswrapper[4718]: W0123 16:16:40.375695 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
38.102.83.204:6443: connect: connection refused Jan 23 16:16:40 crc kubenswrapper[4718]: E0123 16:16:40.375810 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:16:40 crc kubenswrapper[4718]: E0123 16:16:40.479514 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.716065 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.717621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.717676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.717685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:40 crc kubenswrapper[4718]: I0123 16:16:40.717726 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.076381 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:33:14.380090136 +0000 UTC Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.160490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.160558 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.160571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.160733 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.161795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.161835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.161850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.165028 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.165063 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.165077 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.165109 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.166774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.166873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.166891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.171525 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.171588 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.171610 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.171654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.174834 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d" exitCode=0 Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.174871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.175070 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.176339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.176387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.176405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.177943 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e0e74fd97b17b8aff6be92a7a3dbf07fd751efb5132967e24568e84ceddbc828"} Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.178077 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.179230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.179291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.179308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:41 crc kubenswrapper[4718]: I0123 16:16:41.219779 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.076602 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:52:47.056983975 +0000 UTC Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.184069 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5" exitCode=0 Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.184203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5"} Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.184397 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 
16:16:42.186276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.186360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.186387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.191340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a"} Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.191478 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.191784 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.192949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.193009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.193029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.195024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.195133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:42 crc 
kubenswrapper[4718]: I0123 16:16:42.195156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.244463 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:42 crc kubenswrapper[4718]: I0123 16:16:42.986189 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.077465 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:34:42.637379115 +0000 UTC Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.200908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97"} Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.200998 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449"} Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.201033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe"} Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.200945 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.201122 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.201196 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202500 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.202518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.431729 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.431855 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.433055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.433136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.433146 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.528740 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:43 crc kubenswrapper[4718]: I0123 16:16:43.540979 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.078057 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:01:27.232774651 +0000 UTC Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.217753 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.217812 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.217828 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.218040 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219324 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.219385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.220211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.220232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.220244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.220323 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a"} Jan 23 16:16:44 crc kubenswrapper[4718]: I0123 16:16:44.220433 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6"} Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.055916 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.078260 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:57:11.264598367 +0000 UTC Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.219932 4718 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.220294 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.220167 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.219951 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.221965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.222673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 
16:16:45.222703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.986746 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:16:45 crc kubenswrapper[4718]: I0123 16:16:45.986885 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.078993 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:01:49.737590434 +0000 UTC Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.558307 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.559187 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.561162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.561201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:46 crc kubenswrapper[4718]: 
I0123 16:16:46.561215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:46 crc kubenswrapper[4718]: I0123 16:16:46.566565 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:47 crc kubenswrapper[4718]: I0123 16:16:47.080027 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:06:08.501869751 +0000 UTC Jan 23 16:16:47 crc kubenswrapper[4718]: I0123 16:16:47.224730 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:47 crc kubenswrapper[4718]: I0123 16:16:47.225665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:47 crc kubenswrapper[4718]: I0123 16:16:47.225688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:47 crc kubenswrapper[4718]: I0123 16:16:47.225696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.080972 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:44:13.797123304 +0000 UTC Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.199169 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.199503 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.201324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.201382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.201403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.335383 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.335696 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.337443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.337510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.337537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.796197 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.796602 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.798569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.798674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:48 crc kubenswrapper[4718]: I0123 16:16:48.798704 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:49 crc kubenswrapper[4718]: I0123 16:16:49.081868 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:50:53.20881241 +0000 UTC Jan 23 16:16:49 crc kubenswrapper[4718]: E0123 16:16:49.215882 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 16:16:50 crc kubenswrapper[4718]: I0123 16:16:50.082862 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:57:41.049098439 +0000 UTC Jan 23 16:16:50 crc kubenswrapper[4718]: E0123 16:16:50.718405 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 23 16:16:51 crc kubenswrapper[4718]: I0123 16:16:51.072564 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 16:16:51 crc kubenswrapper[4718]: I0123 16:16:51.083839 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:11:27.011051653 +0000 UTC Jan 23 16:16:51 crc kubenswrapper[4718]: E0123 16:16:51.225336 4718 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 16:16:52 
crc kubenswrapper[4718]: I0123 16:16:52.085252 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:12:59.306371766 +0000 UTC Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.087041 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.087091 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.091686 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.091965 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.318963 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 
16:16:52.323333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.323414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.323437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:52 crc kubenswrapper[4718]: I0123 16:16:52.323464 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.086066 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:15:31.226585333 +0000 UTC Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.536144 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.536324 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.538012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.538076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.538097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.548892 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.549265 4718 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.551074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.551248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.551377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:53 crc kubenswrapper[4718]: I0123 16:16:53.555665 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:54 crc kubenswrapper[4718]: I0123 16:16:54.087277 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:48:33.313427379 +0000 UTC Jan 23 16:16:54 crc kubenswrapper[4718]: I0123 16:16:54.242805 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:16:54 crc kubenswrapper[4718]: I0123 16:16:54.244169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:16:54 crc kubenswrapper[4718]: I0123 16:16:54.244248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:16:54 crc kubenswrapper[4718]: I0123 16:16:54.244269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:16:55 crc kubenswrapper[4718]: I0123 16:16:55.089025 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:36:00.213775861 +0000 UTC Jan 23 16:16:55 crc 
kubenswrapper[4718]: I0123 16:16:55.394525 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 16:16:55 crc kubenswrapper[4718]: I0123 16:16:55.413607 4718 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 16:16:55 crc kubenswrapper[4718]: I0123 16:16:55.987487 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:16:55 crc kubenswrapper[4718]: I0123 16:16:55.987619 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 16:16:56 crc kubenswrapper[4718]: I0123 16:16:56.090089 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:09:43.994392798 +0000 UTC Jan 23 16:16:56 crc kubenswrapper[4718]: I0123 16:16:56.793610 4718 csr.go:261] certificate signing request csr-r2755 is approved, waiting to be issued Jan 23 16:16:56 crc kubenswrapper[4718]: I0123 16:16:56.804138 4718 csr.go:257] certificate signing request csr-r2755 is issued Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.071929 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context 
deadline exceeded" interval="3.2s" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.073729 4718 trace.go:236] Trace[439490509]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:16:43.058) (total time: 14014ms): Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[439490509]: ---"Objects listed" error: 14014ms (16:16:57.073) Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[439490509]: [14.014986196s] [14.014986196s] END Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.073761 4718 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.074362 4718 trace.go:236] Trace[102668375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:16:42.067) (total time: 15006ms): Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[102668375]: ---"Objects listed" error: 15006ms (16:16:57.073) Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[102668375]: [15.00615067s] [15.00615067s] END Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.074389 4718 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.074867 4718 trace.go:236] Trace[1762543562]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:16:42.455) (total time: 14619ms): Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[1762543562]: ---"Objects listed" error: 14619ms (16:16:57.074) Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[1762543562]: [14.6191482s] [14.6191482s] END Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.074888 4718 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.075799 4718 apiserver.go:52] "Watching apiserver" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.077886 4718 
reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.083007 4718 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.083910 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.085486 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.085612 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.085748 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.085900 4718 trace.go:236] Trace[2143881089]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:16:42.160) (total time: 14925ms): Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[2143881089]: ---"Objects listed" error: 14924ms (16:16:57.085) Jan 23 16:16:57 crc kubenswrapper[4718]: Trace[2143881089]: [14.925039129s] [14.925039129s] END Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.085931 4718 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.085968 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.086075 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.086271 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.086508 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.086916 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.086599 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.087648 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.087931 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.089822 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.089987 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.089992 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.090185 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:51:16.056586104 +0000 UTC Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.090434 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 16:16:57 crc kubenswrapper[4718]: 
I0123 16:16:57.090865 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.091017 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.091096 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.124246 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.135747 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57656->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.136026 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57656->192.168.126.11:17697: read: connection reset by peer" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.136546 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.136714 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.166148 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.177457 4718 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179307 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179355 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179413 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179437 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179462 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179502 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 
16:16:57.179523 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179543 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179582 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179604 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179624 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179681 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179699 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179739 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 
16:16:57.179796 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179789 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179933 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179967 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.179996 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180023 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180101 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180128 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180154 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180175 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180223 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180268 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180307 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180325 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180343 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180364 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180396 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180413 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180453 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180471 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180490 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180550 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180580 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180598 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180615 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180643 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180740 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180764 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180785 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180805 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180851 4718 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180916 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180947 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.180974 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181002 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 
16:16:57.181032 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181060 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181081 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181137 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181175 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181194 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181210 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181230 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181247 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181269 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181321 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181338 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181354 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181370 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181385 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181400 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181430 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181446 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" 
(UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181462 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181481 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181498 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181515 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181532 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181551 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181582 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181612 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181645 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181054 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181896 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181910 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181097 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181938 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181515 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181509 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181558 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181656 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181701 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181738 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.181880 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182056 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182125 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182278 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182297 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182389 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182470 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182648 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.182930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183147 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183169 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183655 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183778 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183795 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183811 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" 
(UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183830 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183846 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183865 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183887 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.184156 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.184177 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.184447 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.184655 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.184991 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185031 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185095 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185125 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.185130 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:16:57.685109991 +0000 UTC m=+18.832351982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185388 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185435 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.183501 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185495 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185521 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185559 
4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185578 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185644 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185666 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185700 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185716 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185733 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185750 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185769 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185832 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185852 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185852 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185874 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185894 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185914 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185935 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185954 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185973 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.185990 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186026 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186046 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186068 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc 
kubenswrapper[4718]: I0123 16:16:57.186089 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186107 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186139 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186227 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186247 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186299 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186359 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186379 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186396 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186414 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186431 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186472 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186489 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186523 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186563 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:16:57 crc 
kubenswrapper[4718]: I0123 16:16:57.186597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186656 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186702 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186728 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186772 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186812 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186852 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 
16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186869 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186931 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186985 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187003 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187020 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187037 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187057 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187346 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187366 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187385 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187401 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187418 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187609 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187645 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187849 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" 
(UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187872 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187888 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187904 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187920 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195204 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195366 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195440 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195609 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195751 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196716 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196789 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196906 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196946 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197010 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197067 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197155 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.197208 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200117 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202792 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.203602 4718 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204131 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.207809 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.209871 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211117 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node 
\"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211247 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211340 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211418 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211493 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211572 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211711 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211929 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211989 4718 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212017 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212034 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212048 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212063 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212091 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212110 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212127 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212145 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212158 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212173 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212185 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212243 4718 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212264 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212282 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212298 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212317 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212331 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212343 4718 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212355 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212370 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212451 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212464 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212480 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212492 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212507 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212520 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212534 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212545 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc 
kubenswrapper[4718]: I0123 16:16:57.212557 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212569 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212582 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212594 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212608 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.212623 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186390 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186477 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186766 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.186875 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.187542 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.188971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.189304 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.189342 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.189712 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.190015 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.190338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.190794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.190948 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.191188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.193262 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.193519 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.193884 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.194289 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.194777 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195014 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195013 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.195091 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196369 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196397 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196460 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196547 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.196683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.199888 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200288 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200417 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200758 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.200996 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201417 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201431 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201709 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201687 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.201839 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202002 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202250 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202285 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202297 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202653 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202698 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.202994 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.203116 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.203296 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.203769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.203785 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204317 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.204469 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.215702 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:57.715662539 +0000 UTC m=+18.862904550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204496 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204763 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.204994 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.205057 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.205149 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.215770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.205422 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.207784 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.207856 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.208643 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.209354 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.210664 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211175 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211519 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.211882 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.216092 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:57.716068311 +0000 UTC m=+18.863310302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.211891 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.213270 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.213720 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.217733 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.224799 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.218571 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.221665 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.222099 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.222108 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.224725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.225032 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.225872 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.226128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.226952 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.226975 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.226991 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.227044 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:57.727025336 +0000 UTC m=+18.874267327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.227128 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.230275 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.230658 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.230308 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231373 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231681 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231219 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231074 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.232301 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231200 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231309 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.231333 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233049 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233138 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233237 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233387 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.233832 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.234057 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.234200 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.234445 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.234854 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.237917 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.239158 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.239163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.239385 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.239392 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.239667 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.240645 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.241712 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.241845 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.242158 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.242378 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.242807 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.251972 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.252003 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.252017 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.252067 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:57.752050006 +0000 UTC m=+18.899291997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.252452 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.252787 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253069 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253088 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253155 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253168 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253328 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253601 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.253617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.254005 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.254236 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.254362 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.254525 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.254654 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.257724 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.257908 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.258182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.258211 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.258808 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.258928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.259105 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.259861 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.260624 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a" exitCode=255 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.260684 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a"} Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.260820 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.261807 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.261908 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.261938 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.262705 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.262710 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.262843 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.262903 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.263108 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.263962 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.268822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.274210 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.282297 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.283686 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.286980 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.288118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.290949 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.300448 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.310769 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313197 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313556 4718 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313643 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313663 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313753 4718 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313771 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313785 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313798 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313807 4718 reconciler_common.go:293] "Volume detached for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313817 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313827 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313840 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313851 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313861 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313873 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313884 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313894 4718 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313907 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313918 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313937 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313948 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313958 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313968 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313977 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313987 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313996 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314006 4718 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314015 4718 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314025 4718 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314070 4718 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc 
kubenswrapper[4718]: I0123 16:16:57.314082 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314092 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314102 4718 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314113 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314123 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314133 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314144 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314156 4718 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314189 4718 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314200 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314211 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314221 4718 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314244 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314254 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314264 4718 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314273 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314284 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314294 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314304 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314313 4718 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314324 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314349 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314365 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314375 4718 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314385 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314396 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314408 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314418 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.313372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314429 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314569 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314586 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314599 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314610 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314643 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314654 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314664 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314683 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314693 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314708 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.314722 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.315160 4718 scope.go:117] "RemoveContainer" containerID="dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316069 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316123 4718 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316135 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316145 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316157 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316169 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316179 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316192 4718 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316204 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316214 4718 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316223 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316233 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316243 4718 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316252 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316262 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316290 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 
23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316301 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316312 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316328 4718 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316342 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316373 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316395 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316405 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316420 4718 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316430 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316440 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316455 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316465 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316477 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316488 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316497 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316506 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316515 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316525 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316534 4718 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316546 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316556 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316567 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" 
Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316576 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316586 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316595 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316607 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316670 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316683 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316693 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316702 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316727 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316737 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316748 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316757 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316767 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316779 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316791 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316801 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316813 4718 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316821 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316831 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316840 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316851 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316860 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" 
DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316870 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316880 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316890 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316899 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316909 4718 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316919 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316928 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: 
I0123 16:16:57.316937 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316947 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316957 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316966 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316975 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316986 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.316997 4718 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317008 4718 reconciler_common.go:293] 
"Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317018 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317028 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317037 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317046 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317055 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317064 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317074 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317083 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.317092 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.320371 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.329353 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.330942 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.344044 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.360654 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.410093 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.425049 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.432622 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:16:57 crc kubenswrapper[4718]: W0123 16:16:57.436265 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-402e4f94d6231bb52e3a80c75ce16f0cf9a60b5ab5b00362c7160939d3c2dca3 WatchSource:0}: Error finding container 402e4f94d6231bb52e3a80c75ce16f0cf9a60b5ab5b00362c7160939d3c2dca3: Status 404 returned error can't find the container with id 402e4f94d6231bb52e3a80c75ce16f0cf9a60b5ab5b00362c7160939d3c2dca3 Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.723670 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.723868 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:16:58.723833465 +0000 UTC m=+19.871075456 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.724020 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.724061 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.724165 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.724213 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:58.724200845 +0000 UTC m=+19.871442836 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.724244 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.724263 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:58.724257757 +0000 UTC m=+19.871499738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.775836 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.805334 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 16:11:56 +0000 UTC, rotation deadline is 2026-11-11 16:38:31.907728466 +0000 UTC Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.805415 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7008h21m34.102316175s for next certificate rotation Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 
16:16:57.824812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:16:57 crc kubenswrapper[4718]: I0123 16:16:57.824859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825006 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825027 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825039 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825056 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825082 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:58.825069736 +0000 UTC m=+19.972311717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825084 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825102 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:57 crc kubenswrapper[4718]: E0123 16:16:57.825177 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:16:58.825160659 +0000 UTC m=+19.972402650 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.091276 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:29:59.859647753 +0000 UTC Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.265314 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.267299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.267532 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.269427 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.269482 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.269497 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"10c48c36ca963fb663eb8f192fcc95eca0ef2c00acbbb9c38897cd3f54c32176"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.270614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9054177b71d76664155ed5b0fd5310a41734da3557230cddbf7f69d3b1843aed"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.272320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.272400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"402e4f94d6231bb52e3a80c75ce16f0cf9a60b5ab5b00362c7160939d3c2dca3"} Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.290989 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.298556 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.318583 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.318757 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.328258 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.341787 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.360527 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.373595 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.385822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.399766 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.424009 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.440204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.459885 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 
16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.475458 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.491089 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.509352 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.521773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.536257 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:58Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.733839 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.733924 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:58 crc 
kubenswrapper[4718]: I0123 16:16:58.733961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.734114 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:17:00.734077701 +0000 UTC m=+21.881319692 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.734199 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.734199 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.734249 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 16:17:00.734241426 +0000 UTC m=+21.881483417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.734367 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:00.734344689 +0000 UTC m=+21.881586690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.834664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:16:58 crc kubenswrapper[4718]: I0123 16:16:58.835024 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.834912 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835200 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835132 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835359 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835507 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835373 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835611 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:00.83558355 +0000 UTC m=+21.982825541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:58 crc kubenswrapper[4718]: E0123 16:16:58.835759 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:00.835748015 +0000 UTC m=+21.982990006 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.010308 4718 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.091972 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:49:46.997199279 +0000 UTC Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.140050 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.140094 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:16:59 crc kubenswrapper[4718]: E0123 16:16:59.140222 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:16:59 crc kubenswrapper[4718]: E0123 16:16:59.140332 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.140392 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:16:59 crc kubenswrapper[4718]: E0123 16:16:59.140462 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.143581 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.144102 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.144915 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.145698 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.146270 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.146776 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.147360 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.147907 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.148496 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.150553 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.151042 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.152080 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.152594 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.153549 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.154151 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.156768 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.157321 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.157737 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.158651 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.159200 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.159623 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.160679 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.160774 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.161108 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.162145 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.162582 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.163651 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.164261 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.164738 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.165691 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.166245 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.167091 4718 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.167190 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.168834 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.169769 4718 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.170196 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.171698 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.172810 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.172795 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.173364 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.174332 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.175073 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.175510 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.176473 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.177521 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.178107 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.178922 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.179425 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.180378 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.181117 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.182044 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.182511 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.182981 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.183862 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.184500 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.185465 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.186014 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sf9rn"] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.186369 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bjfr4"] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.186503 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x7cc9"] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.186541 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bjfr4" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.186501 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.187627 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tb79v"] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.187955 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.188006 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qnds"] Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.188121 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.188908 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.189415 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.189547 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.189673 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.189806 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.190581 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.191978 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.191986 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192145 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192259 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192523 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192615 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192708 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192698 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192819 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.192700 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.193383 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.193490 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.193931 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.194105 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.194785 4718 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.197647 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.197837 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.198059 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238110 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw496\" (UniqueName: \"kubernetes.io/projected/495f14b9-105b-4d67-ba76-4335df89f346-kube-api-access-hw496\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238201 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-bin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-hostroot\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238237 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238300 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-system-cni-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238322 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbsl8\" (UniqueName: \"kubernetes.io/projected/6e270db3-22b5-4b86-ad17-d6804c7f2d00-kube-api-access-dbsl8\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238366 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlh4f\" (UniqueName: \"kubernetes.io/projected/d2a07769-1921-4484-b1cd-28b23487bb39-kube-api-access-dlh4f\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc 
kubenswrapper[4718]: I0123 16:16:59.238399 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e270db3-22b5-4b86-ad17-d6804c7f2d00-hosts-file\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-etc-kubernetes\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238437 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238456 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48ad62dc-feb1-4fb1-989b-7830ef9061c2-proxy-tls\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc 
kubenswrapper[4718]: I0123 16:16:59.238493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48ad62dc-feb1-4fb1-989b-7830ef9061c2-mcd-auth-proxy-config\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238386 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaf
f14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238513 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-os-release\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-multus\") pod \"multus-tb79v\" (UID: 
\"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238551 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238572 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238588 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-k8s-cni-cncf-io\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qhnq\" (UniqueName: \"kubernetes.io/projected/48ad62dc-feb1-4fb1-989b-7830ef9061c2-kube-api-access-7qhnq\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238640 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-cni-binary-copy\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-multus-daemon-config\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238692 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-cnibin\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238709 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-system-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238724 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-conf-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-socket-dir-parent\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238815 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/48ad62dc-feb1-4fb1-989b-7830ef9061c2-rootfs\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238872 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-os-release\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238897 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-netns\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.238952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239136 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239158 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-multus-certs\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239176 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnx5\" (UniqueName: \"kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-cnibin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.239410 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.240752 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.240793 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.240834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-kubelet\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.258894 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.279020 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.317033 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.338111 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341373 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-os-release\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341428 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-netns\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/48ad62dc-feb1-4fb1-989b-7830ef9061c2-rootfs\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341537 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-multus-certs\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-cnibin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-netns\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341660 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341725 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-multus-certs\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-cnibin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnx5\" (UniqueName: \"kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341794 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/48ad62dc-feb1-4fb1-989b-7830ef9061c2-rootfs\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341818 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341850 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-kubelet\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341862 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-os-release\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.341976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw496\" (UniqueName: \"kubernetes.io/projected/495f14b9-105b-4d67-ba76-4335df89f346-kube-api-access-hw496\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342005 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342025 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-bin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342078 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342116 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-hostroot\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342134 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-system-cni-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342173 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbsl8\" (UniqueName: \"kubernetes.io/projected/6e270db3-22b5-4b86-ad17-d6804c7f2d00-kube-api-access-dbsl8\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342190 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlh4f\" (UniqueName: \"kubernetes.io/projected/d2a07769-1921-4484-b1cd-28b23487bb39-kube-api-access-dlh4f\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342239 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e270db3-22b5-4b86-ad17-d6804c7f2d00-hosts-file\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-etc-kubernetes\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-os-release\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342389 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-multus\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342417 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342445 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48ad62dc-feb1-4fb1-989b-7830ef9061c2-proxy-tls\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342468 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48ad62dc-feb1-4fb1-989b-7830ef9061c2-mcd-auth-proxy-config\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342471 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342497 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-bin\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342516 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-k8s-cni-cncf-io\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-run-k8s-cni-cncf-io\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342573 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qhnq\" (UniqueName: \"kubernetes.io/projected/48ad62dc-feb1-4fb1-989b-7830ef9061c2-kube-api-access-7qhnq\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342591 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e270db3-22b5-4b86-ad17-d6804c7f2d00-hosts-file\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342617 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-etc-kubernetes\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-cni-binary-copy\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342659 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-multus-daemon-config\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-system-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342688 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342291 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-cnibin\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342727 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342732 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342754 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-os-release\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342784 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-conf-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342798 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-cni-multus\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342870 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342891 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-socket-dir-parent\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342913 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-system-cni-dir\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.342973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/495f14b9-105b-4d67-ba76-4335df89f346-cnibin\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343134 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343458 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-conf-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343647 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343660 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-multus-socket-dir-parent\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343742 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-hostroot\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343819 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-host-var-lib-kubelet\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a07769-1921-4484-b1cd-28b23487bb39-system-cni-dir\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343946 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-cni-binary-copy\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.343952 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.344211 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.344495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.344578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/495f14b9-105b-4d67-ba76-4335df89f346-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.344599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48ad62dc-feb1-4fb1-989b-7830ef9061c2-mcd-auth-proxy-config\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.344864 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d2a07769-1921-4484-b1cd-28b23487bb39-multus-daemon-config\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.356933 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.358765 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48ad62dc-feb1-4fb1-989b-7830ef9061c2-proxy-tls\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.383483 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlh4f\" (UniqueName: \"kubernetes.io/projected/d2a07769-1921-4484-b1cd-28b23487bb39-kube-api-access-dlh4f\") pod \"multus-tb79v\" (UID: \"d2a07769-1921-4484-b1cd-28b23487bb39\") " pod="openshift-multus/multus-tb79v"
Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.389751 4718 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-dbsl8\" (UniqueName: \"kubernetes.io/projected/6e270db3-22b5-4b86-ad17-d6804c7f2d00-kube-api-access-dbsl8\") pod \"node-resolver-bjfr4\" (UID: \"6e270db3-22b5-4b86-ad17-d6804c7f2d00\") " pod="openshift-dns/node-resolver-bjfr4" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.393854 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw496\" (UniqueName: \"kubernetes.io/projected/495f14b9-105b-4d67-ba76-4335df89f346-kube-api-access-hw496\") pod \"multus-additional-cni-plugins-x7cc9\" (UID: \"495f14b9-105b-4d67-ba76-4335df89f346\") " pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.394564 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnx5\" (UniqueName: \"kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5\") pod \"ovnkube-node-5qnds\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.395707 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qhnq\" (UniqueName: \"kubernetes.io/projected/48ad62dc-feb1-4fb1-989b-7830ef9061c2-kube-api-access-7qhnq\") pod \"machine-config-daemon-sf9rn\" (UID: \"48ad62dc-feb1-4fb1-989b-7830ef9061c2\") " pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.411571 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.450240 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.478647 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.498258 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.508378 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bjfr4" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.514595 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.516750 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.525184 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" Jan 23 16:16:59 crc kubenswrapper[4718]: W0123 16:16:59.532924 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ad62dc_feb1_4fb1_989b_7830ef9061c2.slice/crio-4bef5483759a1be20f8f64c8e2546ad9d593c22ffc2c2d32eb0d02cbb2fd28be WatchSource:0}: Error finding container 4bef5483759a1be20f8f64c8e2546ad9d593c22ffc2c2d32eb0d02cbb2fd28be: Status 404 returned error can't find the container with id 4bef5483759a1be20f8f64c8e2546ad9d593c22ffc2c2d32eb0d02cbb2fd28be Jan 23 16:16:59 crc kubenswrapper[4718]: W0123 16:16:59.534169 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e270db3_22b5_4b86_ad17_d6804c7f2d00.slice/crio-efcb6161b291e7857240a6e8b0cb8d8a55a224be1ac3c09db513c57380c600a8 WatchSource:0}: Error finding container efcb6161b291e7857240a6e8b0cb8d8a55a224be1ac3c09db513c57380c600a8: Status 404 returned error can't find the container with id efcb6161b291e7857240a6e8b0cb8d8a55a224be1ac3c09db513c57380c600a8 Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.538863 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tb79v" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.543005 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.555678 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: W0123 16:16:59.560190 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a07769_1921_4484_b1cd_28b23487bb39.slice/crio-0aaebf6d9662863431d70d0a5bc43b5f7d85ac6e7df8fa3ea579874796ab50b3 WatchSource:0}: Error finding container 0aaebf6d9662863431d70d0a5bc43b5f7d85ac6e7df8fa3ea579874796ab50b3: Status 404 returned error can't find the container with id 0aaebf6d9662863431d70d0a5bc43b5f7d85ac6e7df8fa3ea579874796ab50b3 Jan 23 16:16:59 crc kubenswrapper[4718]: W0123 16:16:59.566785 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4985ab62_43a5_4fd8_919c_f9db2eea18f7.slice/crio-73c5172b533b332ddb7430b2f83bbba202faeb4fa77a0f449a4a300a4dd60eae WatchSource:0}: Error finding container 73c5172b533b332ddb7430b2f83bbba202faeb4fa77a0f449a4a300a4dd60eae: Status 404 returned error can't find the container with id 73c5172b533b332ddb7430b2f83bbba202faeb4fa77a0f449a4a300a4dd60eae Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.570643 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.585531 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.600788 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.625005 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.649831 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.666649 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:16:59 crc kubenswrapper[4718]: I0123 16:16:59.679279 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:16:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.092745 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:21:07.296881091 +0000 UTC Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.281171 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18" exitCode=0 Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.281462 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.281600 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerStarted","Data":"649dd3fc4a72e479d8c45959111c6c35878b3c35f08070a3bc58281dc4b9e5a3"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.284003 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjfr4" event={"ID":"6e270db3-22b5-4b86-ad17-d6804c7f2d00","Type":"ContainerStarted","Data":"8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.284029 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjfr4" event={"ID":"6e270db3-22b5-4b86-ad17-d6804c7f2d00","Type":"ContainerStarted","Data":"efcb6161b291e7857240a6e8b0cb8d8a55a224be1ac3c09db513c57380c600a8"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.286067 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" exitCode=0 Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.286146 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48"} Jan 23 16:17:00 
crc kubenswrapper[4718]: I0123 16:17:00.286174 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"73c5172b533b332ddb7430b2f83bbba202faeb4fa77a0f449a4a300a4dd60eae"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.288274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerStarted","Data":"7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.288300 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerStarted","Data":"0aaebf6d9662863431d70d0a5bc43b5f7d85ac6e7df8fa3ea579874796ab50b3"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.291563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.291598 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.291613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"4bef5483759a1be20f8f64c8e2546ad9d593c22ffc2c2d32eb0d02cbb2fd28be"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.295364 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.307802 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.327046 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.350463 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.371267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.385964 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.407536 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.421205 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.439010 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.455717 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.468977 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.482072 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.495534 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.522217 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.529607 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.531975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.532032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.532043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.532196 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.538437 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.541919 4718 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.542329 4718 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.543651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.543701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.543717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.543740 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.543754 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.558442 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.562320 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.565850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.565889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.565899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.565917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.565930 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.576705 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.583814 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.588475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.588732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.588915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.589015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.589095 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.601065 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.605348 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.611184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.611247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.611261 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.611285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.611299 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.619900 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.637212 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.641163 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.647771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.647825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.647835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.647855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.647867 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.652316 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.666150 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.666285 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.666601 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.668916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.668953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.668961 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.668980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.668991 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.682310 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.696204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.727862 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.756431 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.761341 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.761438 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.761481 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.761575 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.761649 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:04.761614936 +0000 UTC m=+25.908856927 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.761707 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:17:04.761701198 +0000 UTC m=+25.908943189 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.761774 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.761794 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:04.761789101 +0000 UTC m=+25.909031092 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.780885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.780944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.780958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.780981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.780995 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.810113 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.862606 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.862708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.862859 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.862901 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.862915 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.862962 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.862987 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.863007 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:04.86296302 +0000 UTC m=+26.010205011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.863011 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:00 crc kubenswrapper[4718]: E0123 16:17:00.863080 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:04.863062803 +0000 UTC m=+26.010305004 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.883807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.883846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.883855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.883872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.883883 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.987825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.987871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.987881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.987900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:00 crc kubenswrapper[4718]: I0123 16:17:00.987911 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:00Z","lastTransitionTime":"2026-01-23T16:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.091697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.092109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.092118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.092134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.092145 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.093892 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:22:45.487478017 +0000 UTC Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.139833 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:01 crc kubenswrapper[4718]: E0123 16:17:01.139985 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.140095 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.140160 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:01 crc kubenswrapper[4718]: E0123 16:17:01.140401 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:01 crc kubenswrapper[4718]: E0123 16:17:01.140500 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.194516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.194586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.194606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.194663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.194682 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.298617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.298721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.298739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.298768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.298788 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303796 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.303854 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264"} Jan 23 16:17:01 crc kubenswrapper[4718]: 
I0123 16:17:01.305805 4718 generic.go:334] "Generic (PLEG): container finished" podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0" exitCode=0 Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.305910 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.327556 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.344162 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.364543 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.378206 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.391413 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.406080 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.406161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.406176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.406217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.406233 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.413537 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.426341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.436979 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.450894 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.472910 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.493789 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.509258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.509298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.509311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.509331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.509345 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.513940 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.528539 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.612668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.612914 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.613010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.613095 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.613155 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.716387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.716774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.716858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.716945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.717007 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.793245 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jk97t"] Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.793707 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.795888 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.795920 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.796047 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.797155 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.810325 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.819725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.819772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.819786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc 
kubenswrapper[4718]: I0123 16:17:01.819806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.819821 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.827156 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.840549 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.854351 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.866064 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.872536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e6ffd18-7477-43f8-878a-2cc5849bc796-host\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.872577 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e6ffd18-7477-43f8-878a-2cc5849bc796-serviceca\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.872601 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292s8\" (UniqueName: \"kubernetes.io/projected/4e6ffd18-7477-43f8-878a-2cc5849bc796-kube-api-access-292s8\") pod 
\"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.882356 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.895128 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.920556 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.923131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.923170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.923184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.923205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.923220 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:01Z","lastTransitionTime":"2026-01-23T16:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.934985 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.947123 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.962448 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.973755 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292s8\" (UniqueName: \"kubernetes.io/projected/4e6ffd18-7477-43f8-878a-2cc5849bc796-kube-api-access-292s8\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.974078 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e6ffd18-7477-43f8-878a-2cc5849bc796-host\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.974369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e6ffd18-7477-43f8-878a-2cc5849bc796-serviceca\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.974269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e6ffd18-7477-43f8-878a-2cc5849bc796-host\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.976163 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e6ffd18-7477-43f8-878a-2cc5849bc796-serviceca\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.982334 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:01 crc kubenswrapper[4718]: I0123 16:17:01.995308 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.001862 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292s8\" (UniqueName: \"kubernetes.io/projected/4e6ffd18-7477-43f8-878a-2cc5849bc796-kube-api-access-292s8\") pod \"node-ca-jk97t\" (UID: \"4e6ffd18-7477-43f8-878a-2cc5849bc796\") " pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.020341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.026733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.026805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.026819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.026837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.026848 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.094715 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:09:23.580122368 +0000 UTC Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.110020 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jk97t" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.129624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.129778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.129846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.129912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.129974 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: W0123 16:17:02.132902 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6ffd18_7477_43f8_878a_2cc5849bc796.slice/crio-ea46378c1841ab743b4bf63bc04dbeefc2e09e3585500cf7ff63befb051fa9d3 WatchSource:0}: Error finding container ea46378c1841ab743b4bf63bc04dbeefc2e09e3585500cf7ff63befb051fa9d3: Status 404 returned error can't find the container with id ea46378c1841ab743b4bf63bc04dbeefc2e09e3585500cf7ff63befb051fa9d3 Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.232909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.233138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.233150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.233180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.233192 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.315356 4718 generic.go:334] "Generic (PLEG): container finished" podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f" exitCode=0 Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.315443 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.325473 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jk97t" event={"ID":"4e6ffd18-7477-43f8-878a-2cc5849bc796","Type":"ContainerStarted","Data":"ea46378c1841ab743b4bf63bc04dbeefc2e09e3585500cf7ff63befb051fa9d3"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.330401 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.342269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.342341 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.342360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.342386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.342401 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.344542 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.364934 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.390206 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.409373 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.423005 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.442874 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.446441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.446498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.446510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.446531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.446542 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.454623 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.467505 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.478397 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.491137 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.512325 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.554927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.555245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.555348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.555420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.555478 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.569132 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.586662 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.657891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.657937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.657950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.657968 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.657980 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.761796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.761883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.761902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.761934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.761956 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.866017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.866091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.866108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.866136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.866155 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.969814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.969883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.969904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.969936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.969956 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:02Z","lastTransitionTime":"2026-01-23T16:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:02 crc kubenswrapper[4718]: I0123 16:17:02.993399 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.000393 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.007856 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.012511 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c
20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.046946 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.071952 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.073933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.074004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.074025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.074234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.074256 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.094235 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.095151 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:20:30.669981371 +0000 UTC Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.133242 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.139779 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.139858 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.139923 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:03 crc kubenswrapper[4718]: E0123 16:17:03.140094 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:03 crc kubenswrapper[4718]: E0123 16:17:03.140242 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:03 crc kubenswrapper[4718]: E0123 16:17:03.140491 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.159088 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.177446 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.177512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.177530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.177557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.177577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.185783 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.206073 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.231821 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.253370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.272424 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.280089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.280152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.280169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.280197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.280217 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.296858 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.319822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.335689 4718 generic.go:334] "Generic (PLEG): container finished" podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322" exitCode=0 Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.335774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.340577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jk97t" event={"ID":"4e6ffd18-7477-43f8-878a-2cc5849bc796","Type":"ContainerStarted","Data":"a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.352453 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.370177 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.382854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.382925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.382945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.382974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.382993 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.390445 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.407832 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.427902 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.446919 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.462515 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.486278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.486312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.486321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.486336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.486345 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.489588 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.509854 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.525542 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.543662 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.558771 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.574717 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588103 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.588399 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.604924 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.623039 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.691325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.691374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.691387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.691410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.691420 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.795296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.795653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.795665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.795680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.795690 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.898893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.898953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.898969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.898995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:03 crc kubenswrapper[4718]: I0123 16:17:03.899013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:03Z","lastTransitionTime":"2026-01-23T16:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.001743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.001807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.001828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.001854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.001870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.095385 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:27:52.995627158 +0000 UTC Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.104711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.104778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.104793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.104837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.104857 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.229474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.229531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.229543 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.229565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.229578 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.333312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.334020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.334109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.334243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.334331 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.351545 4718 generic.go:334] "Generic (PLEG): container finished" podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3" exitCode=0 Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.351668 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.361698 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.373164 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.394810 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.412901 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.436728 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.437225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.437247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.437255 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.437273 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.437284 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.451041 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.476853 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.493668 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.509561 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.524734 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.540725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.540780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.540812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.540838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.540854 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.541939 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095
860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.557279 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.577553 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.591569 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.606463 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.633135 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.643421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.643455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.643467 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.643484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.643496 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.747183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.747259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.747284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.747316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.747340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.808430 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.808690 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.808734 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:17:12.808693264 +0000 UTC m=+33.955935305 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.808853 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.808853 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.808990 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.809026 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:12.809001243 +0000 UTC m=+33.956243294 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.809116 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:12.809082425 +0000 UTC m=+33.956324456 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.850484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.850571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.850592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.850619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.850692 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.910402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.910460 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910616 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910672 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910685 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910745 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:12.910727419 +0000 UTC m=+34.057969410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910794 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910837 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910859 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:04 crc kubenswrapper[4718]: E0123 16:17:04.910956 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:12.910924855 +0000 UTC m=+34.058166886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.953940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.953981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.953995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.954013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:04 crc kubenswrapper[4718]: I0123 16:17:04.954024 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:04Z","lastTransitionTime":"2026-01-23T16:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.057478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.057537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.057556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.057581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.057598 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.096261 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:55:24.590562939 +0000 UTC Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.140038 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.140125 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.140160 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:05 crc kubenswrapper[4718]: E0123 16:17:05.140257 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:05 crc kubenswrapper[4718]: E0123 16:17:05.140496 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:05 crc kubenswrapper[4718]: E0123 16:17:05.140680 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.160919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.161001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.161033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.161249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.161279 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.264962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.265094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.265113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.265141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.265192 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.367840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.367933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.367953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.367981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.368000 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.372367 4718 generic.go:334] "Generic (PLEG): container finished" podID="495f14b9-105b-4d67-ba76-4335df89f346" containerID="d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5" exitCode=0 Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.372442 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerDied","Data":"d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.397487 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.408917 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.431734 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16
:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.450998 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.467026 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.472343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.472438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.472520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.472613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.472713 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.480450 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.492964 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.506094 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.525563 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.538550 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.554503 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.566767 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.575451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.575471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.575479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.575493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.575502 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.587450 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.601047 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.615595 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:05Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.678194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.678271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.678293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc 
kubenswrapper[4718]: I0123 16:17:05.678321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.678339 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.781286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.781687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.781800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.781944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.782060 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.886263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.886332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.886352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.886382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.886403 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.990462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.990527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.990539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.990565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:05 crc kubenswrapper[4718]: I0123 16:17:05.990578 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:05Z","lastTransitionTime":"2026-01-23T16:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.094090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.094136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.094147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.094214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.094227 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.096448 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:23:31.51932664 +0000 UTC Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.197482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.197523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.197537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.197558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.197570 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.301182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.301252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.301275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.301307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.301326 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.382604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.383149 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.383173 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.383185 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.392550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" event={"ID":"495f14b9-105b-4d67-ba76-4335df89f346","Type":"ContainerStarted","Data":"6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412135 4718 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.412484 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-et
c-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.416122 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.417614 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.428325 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.458394 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.488142 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.505214 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.515453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.515510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.515520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.515552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.515566 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.519332 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.533559 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06
e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.546796 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.564846 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.576300 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.588769 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.601800 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.617378 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.618269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.618318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.618333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.618355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.618372 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.630976 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.662014 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.683179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.697584 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.721753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.721791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.721820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc 
kubenswrapper[4718]: I0123 16:17:06.721837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.721849 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.723079 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.739053 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.753517 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.772054 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.791266 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.803846 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.821509 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.824694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.824728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.824740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.824756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.824769 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.841543 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.856619 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.867429 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.882922 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.894256 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.916518 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.927810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.927858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.927871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.927891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:06 crc kubenswrapper[4718]: I0123 16:17:06.927903 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:06Z","lastTransitionTime":"2026-01-23T16:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.031153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.031207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.031220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.031242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.031263 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.096673 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:26:38.028238306 +0000 UTC Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.139619 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.139757 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.139668 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:07 crc kubenswrapper[4718]: E0123 16:17:07.140138 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:07 crc kubenswrapper[4718]: E0123 16:17:07.140268 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:07 crc kubenswrapper[4718]: E0123 16:17:07.139951 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.141205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.142121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.142151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.142174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.142189 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.245443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.245492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.245505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.245556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.245570 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.348679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.348752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.348770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.348799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.348819 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.452109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.452215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.452233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.452260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.452280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.556238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.556294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.556306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.556332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.556345 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.659994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.660066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.660086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.660118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.660138 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.763846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.763911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.763928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.763954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.763971 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.867801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.867845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.867857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.867874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.867888 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.970699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.971044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.971160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.971252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:07 crc kubenswrapper[4718]: I0123 16:17:07.971378 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:07Z","lastTransitionTime":"2026-01-23T16:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.074623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.074948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.075009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.075071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.075135 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.096922 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:31:24.725805546 +0000 UTC Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.178366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.178412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.178422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.178438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.178451 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.281380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.281653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.281725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.281807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.281870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.340821 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.360016 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.379225 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.385060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.385129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.385147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.385234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.385265 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.404865 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.421109 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.436022 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.458274 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.487386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.487759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.487844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.487928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.487985 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.511921 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.526899 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
3T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.538760 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.552106 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.585909 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16
:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.590483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.590526 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.590546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.590572 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.590590 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.602282 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
3T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.616697 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.635700 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.649398 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:08Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.693726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.693794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.693809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.693833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.693847 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.797317 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.797385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.797404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.797433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.797453 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.900722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.900754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.900762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.900778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:08 crc kubenswrapper[4718]: I0123 16:17:08.900789 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:08Z","lastTransitionTime":"2026-01-23T16:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.004102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.004171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.004191 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.004219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.004237 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.097804 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:07:40.766506093 +0000 UTC Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.107980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.108049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.108068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.108097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.108119 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.139686 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.139764 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:09 crc kubenswrapper[4718]: E0123 16:17:09.139854 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.139778 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:09 crc kubenswrapper[4718]: E0123 16:17:09.140010 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:09 crc kubenswrapper[4718]: E0123 16:17:09.140088 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.164543 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.184397 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.211689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.211738 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.211755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.211779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.211801 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.232145 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.252208 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.269161 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.285305 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.303487 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314289 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.314712 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.330369 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.342428 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.358244 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.377428 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.392341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.405803 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/0.log" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.406858 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.409548 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376" exitCode=1 Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.409604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.411611 4718 scope.go:117] "RemoveContainer" containerID="8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.419541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.419585 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.419599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.419619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.419655 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.435609 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.454002 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c7
1d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.471139 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.492555 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16
:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.508809 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.521602 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.522083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.522121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.522141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.522162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.522175 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.537620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.555527 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.571807 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.585704 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.602072 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.615928 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.625558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.625600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.625615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.625670 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.625689 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.630724 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.652009 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"message\\\":\\\"ewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.547921 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:08.547974 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548087 6018 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.548367 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548527 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548696 6018 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549032 6018 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549110 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.668297 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.683109 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.728452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc 
kubenswrapper[4718]: I0123 16:17:09.728525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.728540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.728588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.728603 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.830910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.830957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.830966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.830981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.830991 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.934172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.934219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.934228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.934242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:09 crc kubenswrapper[4718]: I0123 16:17:09.934257 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:09Z","lastTransitionTime":"2026-01-23T16:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.036864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.036932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.036945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.036968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.036979 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.098818 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:29:37.194412689 +0000 UTC Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.139678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.139726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.139739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.139759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.139770 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.241797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.241865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.241876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.241893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.241903 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.344904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.344973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.344986 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.345005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.345017 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.420780 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/0.log" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.424531 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.425093 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.435991 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f
9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.447700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.447747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.447764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 
16:17:10.447788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.447804 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.459155 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"message\\\":\\\"ewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.547921 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:08.547974 6018 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548087 6018 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.548367 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548527 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548696 6018 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549032 6018 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549110 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.473356 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.485808 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.513900 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1
ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.529023 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.540506 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.550715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.550775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.550794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.550821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.550839 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.553536 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.571304 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.584521 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.600245 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.616001 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.632036 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.644227 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.653108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.653146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.653156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.653171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.653181 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.658973 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.755310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.755346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.755355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.755402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.755415 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.858010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.858053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.858062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.858077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.858087 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.941785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.941859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.941884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.941915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.941937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:10 crc kubenswrapper[4718]: E0123 16:17:10.965223 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.969991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.970063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.970083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.970110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:10 crc kubenswrapper[4718]: I0123 16:17:10.970130 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:10Z","lastTransitionTime":"2026-01-23T16:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.001155 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the previous patch attempt, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.007704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.007781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.007800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.007828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.007848 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.037682 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the previous patch attempt, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.043948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.043998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.044032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.044056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.044071 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.067366 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.073301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.073355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.073376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.073408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.073432 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.090835 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.091095 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.093798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.093859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.093879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.093912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.093933 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.100013 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:39:05.465052782 +0000 UTC Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.144208 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.144283 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.144239 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.144449 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.144707 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.144900 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.197132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.197186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.197197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.197216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.197228 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.301155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.301221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.301236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.301264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.301280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.405157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.405223 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.405240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.405266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.405283 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.433823 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/1.log" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.434731 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/0.log" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.439090 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82" exitCode=1 Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.439294 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.439460 4718 scope.go:117] "RemoveContainer" containerID="8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.440294 4718 scope.go:117] "RemoveContainer" containerID="609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82" Jan 23 16:17:11 crc kubenswrapper[4718]: E0123 16:17:11.440669 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.464420 4718 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.480989 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.507912 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.508168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.508264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.508355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.508442 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.518274 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.541510 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.558595 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.579984 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.607043 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.610872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.610918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.610927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.610945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.610961 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.628899 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.644826 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.660841 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.676656 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.692584 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.710592 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.713855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.713897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.713911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.713929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.713941 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.725163 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.752182 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"message\\\":\\\"ewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.547921 6018 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:08.547974 6018 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548087 6018 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.548367 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548527 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548696 6018 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549032 6018 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549110 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.816834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.817167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.817275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.817387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.817484 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.924353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.925069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.925098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.925132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:11 crc kubenswrapper[4718]: I0123 16:17:11.925156 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:11Z","lastTransitionTime":"2026-01-23T16:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.030574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.030685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.030710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.030749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.030774 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.100697 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:30:04.784811867 +0000 UTC Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.135130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.135184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.135203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.135228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.135245 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.239194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.239272 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.239290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.239319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.239403 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.305221 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b"] Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.307004 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.311329 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.311370 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.327517 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.342095 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.342151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc 
kubenswrapper[4718]: I0123 16:17:12.342169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.342197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.342216 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.350949 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.374255 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.397024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.404219 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.404326 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.404382 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4202bddb-b096-4dfc-b808-a6874059803c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.404431 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6dp\" (UniqueName: \"kubernetes.io/projected/4202bddb-b096-4dfc-b808-a6874059803c-kube-api-access-sv6dp\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.423029 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445172 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.445503 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.447439 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/1.log" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.453331 4718 scope.go:117] "RemoveContainer" containerID="609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82" Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.453567 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.473844 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e86e765e6725af39aba60c8ed74ca97924a0c7d063fe423dba8d71a88854376\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"message\\\":\\\"ewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.547921 6018 
reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:08.547974 6018 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548087 6018 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:08.548367 6018 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548527 6018 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.548696 6018 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549032 6018 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:08.549110 6018 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T16:17:10Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.500089 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.506200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.506387 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.506475 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4202bddb-b096-4dfc-b808-a6874059803c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.506552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6dp\" (UniqueName: \"kubernetes.io/projected/4202bddb-b096-4dfc-b808-a6874059803c-kube-api-access-sv6dp\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.507889 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.508478 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4202bddb-b096-4dfc-b808-a6874059803c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.515695 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.518931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4202bddb-b096-4dfc-b808-a6874059803c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.538257 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.539573 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6dp\" (UniqueName: \"kubernetes.io/projected/4202bddb-b096-4dfc-b808-a6874059803c-kube-api-access-sv6dp\") pod \"ovnkube-control-plane-749d76644c-qn56b\" (UID: \"4202bddb-b096-4dfc-b808-a6874059803c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.548695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.548768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.548794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.548831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.548850 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.569315 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4
e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.598421 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.617891 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.631916 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.640090 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.652267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.652334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.652353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.652415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.652436 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: W0123 16:17:12.655737 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4202bddb_b096_4dfc_b808_a6874059803c.slice/crio-b04fe2e751e2a565b93f163bfb48eda7c1c89d7ec14c7ebf8225c62baea22fef WatchSource:0}: Error finding container b04fe2e751e2a565b93f163bfb48eda7c1c89d7ec14c7ebf8225c62baea22fef: Status 404 returned error can't find the container with id b04fe2e751e2a565b93f163bfb48eda7c1c89d7ec14c7ebf8225c62baea22fef Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.659993 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.676065 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.695915 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.715672 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.737377 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.756830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.756886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.756900 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.756923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.756936 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.757001 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.775369 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.792921 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.812193 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.812327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.812512 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.812576 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:17:28.812418942 +0000 UTC m=+49.959660973 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.812687 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:28.812661259 +0000 UTC m=+49.959903460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.812763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.812977 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.813149 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:28.813110042 +0000 UTC m=+49.960352073 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.817866 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.835285 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.849480 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.862247 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.866595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.866643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.866653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.866677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.866688 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.880896 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.895213 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.914105 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.914160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914301 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914318 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914330 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914382 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:28.914362284 +0000 UTC m=+50.061604275 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914720 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914741 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914751 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:12 crc kubenswrapper[4718]: E0123 16:17:12.914785 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:28.914777986 +0000 UTC m=+50.062019977 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.930103 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df2999
4e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.948762 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.970392 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.970865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.970945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.970966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.970999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.971023 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:12Z","lastTransitionTime":"2026-01-23T16:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:12 crc kubenswrapper[4718]: I0123 16:17:12.992578 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.074521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.074838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.074984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.075139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.075286 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.101694 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:14:56.722602309 +0000 UTC Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.139856 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.139883 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.140234 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.140566 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.141466 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.141835 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.179003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.179045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.179062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.179081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.179094 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.282318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.282375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.282387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.282412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.282425 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.385484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.385554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.385574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.385602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.385619 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.459106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" event={"ID":"4202bddb-b096-4dfc-b808-a6874059803c","Type":"ContainerStarted","Data":"04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.459193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" event={"ID":"4202bddb-b096-4dfc-b808-a6874059803c","Type":"ContainerStarted","Data":"61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.459215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" event={"ID":"4202bddb-b096-4dfc-b808-a6874059803c","Type":"ContainerStarted","Data":"b04fe2e751e2a565b93f163bfb48eda7c1c89d7ec14c7ebf8225c62baea22fef"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.462422 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dppxp"] Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.463032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.463121 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.476911 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.489055 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.489093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.489103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.489120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.489134 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.494969 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.511379 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.520032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.520280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xhh\" (UniqueName: \"kubernetes.io/projected/593a4237-c13e-4403-b139-f32b552ca770-kube-api-access-l7xhh\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.533867 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.553395 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.575471 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591367 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.591581 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.621344 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.621504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7xhh\" (UniqueName: \"kubernetes.io/projected/593a4237-c13e-4403-b139-f32b552ca770-kube-api-access-l7xhh\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.621582 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:13 crc kubenswrapper[4718]: E0123 16:17:13.621718 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs 
podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:14.121688687 +0000 UTC m=+35.268930678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.622748 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var
/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.641989 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.654569 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7xhh\" (UniqueName: \"kubernetes.io/projected/593a4237-c13e-4403-b139-f32b552ca770-kube-api-access-l7xhh\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.681766 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.695263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.695325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.695348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.695376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.695395 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.703223 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.718933 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213
d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.732899 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.748778 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.759613 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.790205 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.797836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.797880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.797894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.797914 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.797928 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.826488 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.847445 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.870067 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051
c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.890405 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.900245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.900290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.900301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.900319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.900330 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:13Z","lastTransitionTime":"2026-01-23T16:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.903582 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.915295 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.940308 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.955056 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.967357 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.982460 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:13 crc kubenswrapper[4718]: I0123 16:17:13.994947 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-23T16:17:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.003658 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.003703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.003720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.003740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.003753 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.009712 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.020604 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.035758 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.044908 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.071518 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.087363 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.103611 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:23:14.455334102 +0000 UTC Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.106499 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.106540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.106555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.106577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.106592 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.126993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:14 crc kubenswrapper[4718]: E0123 16:17:14.127198 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:14 crc kubenswrapper[4718]: E0123 16:17:14.127264 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. 
No retries permitted until 2026-01-23 16:17:15.127248848 +0000 UTC m=+36.274490839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.209454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.209499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.209510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.209530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.209543 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.313035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.313117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.313140 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.313181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.313206 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.416893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.417005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.417029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.417064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.417089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.521781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.521865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.521894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.521925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.521951 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.626337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.626415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.626439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.626470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.626496 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.730472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.730536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.730602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.730666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.730691 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.834596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.834680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.834692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.834717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.834731 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.938503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.938594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.938684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.938740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:14 crc kubenswrapper[4718]: I0123 16:17:14.938762 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:14Z","lastTransitionTime":"2026-01-23T16:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.041897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.041963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.041981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.042008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.042028 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.104106 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:36:17.709081611 +0000 UTC Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.140024 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.140123 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.140134 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.140278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140281 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.140293 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140374 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140458 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140556 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140578 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:17.140548122 +0000 UTC m=+38.287790133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:15 crc kubenswrapper[4718]: E0123 16:17:15.140461 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.144996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.145033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.145048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.145069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.145084 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.248449 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.248544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.248563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.248614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.248677 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.352426 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.352488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.352505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.352535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.352556 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.455462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.455535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.455553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.455582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.455601 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.559753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.559800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.559812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.559831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.559844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.662830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.662887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.662900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.662924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.662938 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.766317 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.766379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.766394 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.766421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.766438 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.869314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.869367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.869376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.869393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.869404 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.972808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.972848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.972863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.972880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:15 crc kubenswrapper[4718]: I0123 16:17:15.972893 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:15Z","lastTransitionTime":"2026-01-23T16:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.076820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.076876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.076898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.076927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.076948 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.104817 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:29:32.658917948 +0000 UTC Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.180801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.180954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.180979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.181006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.181028 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.284229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.284278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.284288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.284306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.284317 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.387138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.387212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.387230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.387702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.387755 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.491575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.491641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.491661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.491737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.491772 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.595897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.595962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.595978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.596001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.596020 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.700518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.700598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.700611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.700660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.700676 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.804024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.804094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.804107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.804134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.804153 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.912032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.912109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.912121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.912147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:16 crc kubenswrapper[4718]: I0123 16:17:16.912161 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:16Z","lastTransitionTime":"2026-01-23T16:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.015045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.015112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.015133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.015165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.015186 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.105909 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:47:13.873022112 +0000 UTC Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.118205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.118290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.118306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.118326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.118343 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.140451 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.140514 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.140591 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.140467 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.140750 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.140857 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.141010 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.141077 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.168040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.168477 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:17 crc kubenswrapper[4718]: E0123 16:17:17.168780 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:21.168741666 +0000 UTC m=+42.315983807 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.222121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.222176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.222188 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.222210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.222224 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.325432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.325508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.325530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.325558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.325578 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.429794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.429855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.429871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.429896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.429913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.533525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.533600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.533618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.533650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.533715 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.636361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.636432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.636457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.636490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.636518 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.740043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.740131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.740158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.740190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.740215 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.844598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.844729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.844783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.844819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.844875 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.949327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.949412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.949433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.949462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:17 crc kubenswrapper[4718]: I0123 16:17:17.949481 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:17Z","lastTransitionTime":"2026-01-23T16:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.053491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.053594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.053619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.053697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.053722 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.107107 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:53:03.635010788 +0000 UTC Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.156861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.156957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.156976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.157037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.157060 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.260421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.260495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.260521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.260554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.260579 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.365298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.365362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.365380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.365408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.365430 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.469669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.469731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.469748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.469779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.469802 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.573912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.573993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.574015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.574044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.574063 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.677498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.677572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.677590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.677618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.677667 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.781044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.781122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.781145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.781178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.781201 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.885014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.885060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.885072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.885093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.885107 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.987817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.987922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.987943 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.987973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:18 crc kubenswrapper[4718]: I0123 16:17:18.987995 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:18Z","lastTransitionTime":"2026-01-23T16:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.093209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.093311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.093330 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.093362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.093382 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.107593 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:25:35.244939376 +0000 UTC Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.140086 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.140104 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:19 crc kubenswrapper[4718]: E0123 16:17:19.140360 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.140452 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:19 crc kubenswrapper[4718]: E0123 16:17:19.140840 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:19 crc kubenswrapper[4718]: E0123 16:17:19.141056 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.141239 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:19 crc kubenswrapper[4718]: E0123 16:17:19.141546 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.165841 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.183776 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.197360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.197707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.197858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc 
kubenswrapper[4718]: I0123 16:17:19.198201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.198235 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.199990 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23
T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.223290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.252523 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.272139 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.301060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.301145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.301172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.301211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.301236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.308759 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0c
fdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.336585 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.362939 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.381890 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.401314 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.404698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.404782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.404810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.404844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.404871 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.422707 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.444442 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.465256 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.481791 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.508431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.508503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.508527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.508560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.508636 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.518472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.539742 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.619733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.619803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.619823 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.619853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.619871 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.723153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.723287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.723313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.723385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.723416 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.826987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.827126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.827151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.827220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.827243 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.930801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.930877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.930900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.930931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:19 crc kubenswrapper[4718]: I0123 16:17:19.930957 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:19Z","lastTransitionTime":"2026-01-23T16:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.034599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.034726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.034751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.034787 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.034813 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.108392 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:38:21.641867467 +0000 UTC Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.138710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.138788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.138806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.138837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.138860 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.241689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.241771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.241837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.241881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.241909 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.344984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.345042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.345058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.345083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.345103 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.448626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.448732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.448750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.448779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.448806 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.552523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.552592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.552610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.552754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.552774 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.656891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.656968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.656989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.657019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.657040 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.761036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.761086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.761098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.761121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.761136 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.864336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.864416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.864433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.864464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.864484 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.968744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.968809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.968826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.968857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:20 crc kubenswrapper[4718]: I0123 16:17:20.968876 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:20Z","lastTransitionTime":"2026-01-23T16:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.072105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.072161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.072173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.072193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.072206 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.108555 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:40:53.358700459 +0000 UTC Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.140349 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.140387 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.140503 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.140511 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.140659 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.140727 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.140783 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.140834 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.175563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.175673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.175694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.175724 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.175744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.218488 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.218813 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.218942 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:29.218910336 +0000 UTC m=+50.366152367 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.279062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.279127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.279146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.279176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.279199 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.382291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.382389 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.382408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.382465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.382487 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.409453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.409528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.409546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.409576 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.409598 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.424669 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.431048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.431102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.431119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.431148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.431165 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.447899 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.453998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.454053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.454065 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.454084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.454095 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.466775 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.471494 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.471570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.471593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.471619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.471682 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.494568 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.498956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.499034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.499055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.499084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.499105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.519399 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:21 crc kubenswrapper[4718]: E0123 16:17:21.519515 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.522049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.522072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.522080 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.522097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.522109 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.624172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.624222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.624233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.624251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.624263 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.728183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.728266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.728293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.728327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.728363 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.832282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.832332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.832341 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.832360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.832374 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.936223 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.936287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.936309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.936342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:21 crc kubenswrapper[4718]: I0123 16:17:21.936365 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:21Z","lastTransitionTime":"2026-01-23T16:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.039396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.039460 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.039483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.039509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.039527 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.109018 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:18:27.006760694 +0000 UTC Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.143402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.143461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.143473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.143496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.143510 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.247546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.247603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.247616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.247638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.247669 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.351352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.351395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.351407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.351427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.351438 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.453778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.453810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.453821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.453839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.453851 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.557836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.557881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.557890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.557927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.557937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.661425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.661496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.661507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.661529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.661543 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.768103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.768187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.768205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.768235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.768253 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.870797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.870874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.870892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.870923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.870944 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.974521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.974582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.974598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.974623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:22 crc kubenswrapper[4718]: I0123 16:17:22.974686 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:22Z","lastTransitionTime":"2026-01-23T16:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.078411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.078480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.078498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.078529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.078550 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.109976 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:44:10.50438728 +0000 UTC Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.139525 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.139602 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.139625 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.139600 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:23 crc kubenswrapper[4718]: E0123 16:17:23.139902 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:23 crc kubenswrapper[4718]: E0123 16:17:23.140048 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:23 crc kubenswrapper[4718]: E0123 16:17:23.140224 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:23 crc kubenswrapper[4718]: E0123 16:17:23.140329 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.182766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.182813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.182827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.182855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.182870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.285541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.285597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.285614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.285689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.285710 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.389958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.390005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.390014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.390033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.390047 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.493146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.493274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.493299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.493332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.493357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.597167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.597240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.597263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.597329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.597355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.702164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.702241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.702259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.702297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.702340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.805552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.805606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.805626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.805697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.805719 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.908586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.908688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.908706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.908737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:23 crc kubenswrapper[4718]: I0123 16:17:23.908771 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:23Z","lastTransitionTime":"2026-01-23T16:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.012275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.012371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.012397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.012848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.012888 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.110467 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:45:59.164670401 +0000 UTC Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.116039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.116107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.116125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.116153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.116173 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.219536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.219660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.219679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.219704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.219724 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.323478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.323546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.323566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.323595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.323614 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.427293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.427388 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.427414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.427454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.427481 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.530861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.530930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.530954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.530987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.531011 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.634511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.634585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.634608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.634685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.634705 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.737374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.737440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.737457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.737485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.737503 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.841056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.841107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.841119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.841144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.841157 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.944872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.945772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.945820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.945849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:24 crc kubenswrapper[4718]: I0123 16:17:24.945870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:24Z","lastTransitionTime":"2026-01-23T16:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.049535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.049941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.050093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.050434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.050570 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.111561 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:41:27.650386494 +0000 UTC Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.140129 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:25 crc kubenswrapper[4718]: E0123 16:17:25.140397 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.140864 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:25 crc kubenswrapper[4718]: E0123 16:17:25.141055 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.141138 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:25 crc kubenswrapper[4718]: E0123 16:17:25.141311 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.142150 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:25 crc kubenswrapper[4718]: E0123 16:17:25.142403 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.142534 4718 scope.go:117] "RemoveContainer" containerID="609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.155748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.155814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.155834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.155857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.155876 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.261069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.261424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.261442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.261468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.261487 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.365505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.365582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.365608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.365710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.365738 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.469303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.469520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.469552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.469677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.469712 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.519998 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/1.log" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.525336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.526351 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.549506 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.573476 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.574076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.574164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.574182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.574242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.574262 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.606204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.661254 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.677769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.677812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.677824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.677862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.677875 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.680498 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095
860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.701200 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.718290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc 
kubenswrapper[4718]: I0123 16:17:25.732167 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.758800 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.778502 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.781338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.781493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.781581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc 
kubenswrapper[4718]: I0123 16:17:25.781707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.781836 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.798505 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23
T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.826045 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e
48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.852994 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.877891 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.885336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.885365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.885377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.885398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.885412 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.910862 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.926958 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.939042 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.988382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.988416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.988425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.988439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:25 crc kubenswrapper[4718]: I0123 16:17:25.988448 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:25Z","lastTransitionTime":"2026-01-23T16:17:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.090727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.091164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.091313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.091505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.091764 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.111767 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:26:36.924412518 +0000 UTC Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.195268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.195850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.196006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.196153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.196324 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.300084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.300137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.300150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.300170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.300183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.402316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.402377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.402395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.402452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.402476 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.505734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.505806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.505828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.505857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.506093 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.609783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.609861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.609888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.609929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.609958 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.712921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.712981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.713523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.713567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.713586 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.822065 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.822146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.822166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.822197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.822215 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.926921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.926996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.927013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.927043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:26 crc kubenswrapper[4718]: I0123 16:17:26.927063 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:26Z","lastTransitionTime":"2026-01-23T16:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.030946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.031022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.031041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.031073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.031094 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.113599 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:42:55.831974439 +0000 UTC Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.135071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.135137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.135153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.135210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.135229 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.139545 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:27 crc kubenswrapper[4718]: E0123 16:17:27.139701 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.140202 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:27 crc kubenswrapper[4718]: E0123 16:17:27.140274 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.140331 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:27 crc kubenswrapper[4718]: E0123 16:17:27.140377 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.140426 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:27 crc kubenswrapper[4718]: E0123 16:17:27.140476 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.239656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.239729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.239742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.239766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.239781 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.344251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.344324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.344352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.344383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.344408 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.448236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.448318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.448338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.448367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.448388 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.543928 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/2.log" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.545182 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/1.log" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551528 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6" exitCode=1 Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551593 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551749 4718 scope.go:117] "RemoveContainer" containerID="609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.551941 
4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.554806 4718 scope.go:117] "RemoveContainer" containerID="b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6" Jan 23 16:17:27 crc kubenswrapper[4718]: E0123 16:17:27.556216 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.579488 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.598566 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.621031 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52
c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.655047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.655110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.655127 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.655154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.655172 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.657042 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3
647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.683186 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.708888 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.727269 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.749190 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.758782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.758838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.758852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.758874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.758892 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.767875 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.790075 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.812088 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.839429 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.860494 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.862907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.862989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.863012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.863041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.863063 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.885020 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.902020 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.935313 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.956691 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:27Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:27 crc 
kubenswrapper[4718]: I0123 16:17:27.967705 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.967803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.967838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.967875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:27 crc kubenswrapper[4718]: I0123 16:17:27.967897 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:27Z","lastTransitionTime":"2026-01-23T16:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.071850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.071984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.072002 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.072030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.072050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.114560 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:02:04.334911028 +0000 UTC Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.176478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.176559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.176579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.176612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.176676 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.280310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.280384 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.280402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.280430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.280449 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.384583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.384690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.384711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.384741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.384764 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.489312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.489401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.489420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.489451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.489477 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.560091 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/2.log" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.593252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.593312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.593326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.593351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.593369 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.696496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.696560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.696572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.696593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.696603 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.800043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.800122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.800141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.800170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.800193 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.820241 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.820494 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 16:18:00.820453288 +0000 UTC m=+81.967695319 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.820680 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.820783 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.820962 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.821006 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.821108 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:18:00.821078196 +0000 UTC m=+81.968320217 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.821155 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:18:00.821139037 +0000 UTC m=+81.968381058 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.903112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.903177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.903200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.903235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.903266 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:28Z","lastTransitionTime":"2026-01-23T16:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.922041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:28 crc kubenswrapper[4718]: I0123 16:17:28.922121 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922353 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922399 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922420 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922493 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:18:00.922466182 +0000 UTC m=+82.069708203 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922353 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922571 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922598 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:28 crc kubenswrapper[4718]: E0123 16:17:28.922727 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:18:00.922700908 +0000 UTC m=+82.069942929 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.006916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.007176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.007196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.007228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.007249 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.110783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.110853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.110874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.110905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.110927 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.115146 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:27:48.795196312 +0000 UTC Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.140168 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.140248 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.140286 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.140297 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.140418 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.140537 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.140660 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.140724 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.160719 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42
ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.182185 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.201838 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac
806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.215026 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.215083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.215102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.215129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.215149 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.225280 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.226282 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.226661 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:29 crc kubenswrapper[4718]: E0123 16:17:29.226866 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:17:45.226812555 +0000 UTC m=+66.374054746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.246481 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.273370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.289877 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.318950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.319015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.319035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.319067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.319087 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.320625 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0c
fdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.348923 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.368862 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.393968 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.421873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.421936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.421953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.421984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.422005 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.434982 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095
860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.467430 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.488689 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.498928 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.523587 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.524830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.524859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.524869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.524887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.524899 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.537408 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:29Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:29 crc 
kubenswrapper[4718]: I0123 16:17:29.628054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.628110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.628119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.628142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.628154 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.731686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.731727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.731736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.731758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.731770 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.834542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.834660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.834674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.834691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.834701 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.937295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.937364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.937375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.937394 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:29 crc kubenswrapper[4718]: I0123 16:17:29.937408 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:29Z","lastTransitionTime":"2026-01-23T16:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.039998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.040050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.040064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.040082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.040302 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.115976 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:15:32.785334674 +0000 UTC Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.143941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.143987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.144000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.144024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.144038 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.247271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.247329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.247343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.247364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.247379 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.350647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.350707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.350720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.350745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.350760 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.454273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.454325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.454338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.454357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.454372 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.557822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.557957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.557984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.558021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.558050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.660663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.660716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.660733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.660753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.660767 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.763946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.764033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.764056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.764090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.764112 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.867834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.867891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.867902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.867957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.867975 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.970540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.970604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.970622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.970683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:30 crc kubenswrapper[4718]: I0123 16:17:30.970707 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:30Z","lastTransitionTime":"2026-01-23T16:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.074061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.074139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.074163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.074196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.074215 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.116681 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:25:41.34780684 +0000 UTC Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.140188 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.140345 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.140398 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.140421 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.140469 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.140497 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.140537 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.140776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.177340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.177383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.177422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.177445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.177458 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.280231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.280297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.280310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.280336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.280353 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.382995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.383048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.383064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.383086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.383101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.485491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.485555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.485574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.485606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.485664 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.589130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.589210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.589230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.589259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.589279 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.692373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.692427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.692447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.692473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.692493 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.751294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.751358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.751376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.751407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.751433 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.771256 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:31Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.775536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.775600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.775610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.775651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.775667 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.794738 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:31Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.800189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.800246 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.800258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.800281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.800293 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.818957 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:31Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.825346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.825391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.825424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.825445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.825459 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.837859 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:31Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.842204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.842342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.842422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.842505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.842571 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.856712 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:31Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:31 crc kubenswrapper[4718]: E0123 16:17:31.856834 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.859027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.859068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.859122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.859144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.859157 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.961648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.961691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.961702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.961723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:31 crc kubenswrapper[4718]: I0123 16:17:31.961736 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:31Z","lastTransitionTime":"2026-01-23T16:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.065092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.065532 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.065652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.065772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.065870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.117815 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:30:30.266248857 +0000 UTC Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.168766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.168834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.168855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.168885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.168907 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.271390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.271478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.271499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.271544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.271565 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.375025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.375105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.375125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.375157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.375177 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.478673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.478747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.478771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.478809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.478849 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.581272 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.581354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.581374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.581400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.581419 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.684834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.684945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.684971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.685004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.685041 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.788030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.788115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.788141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.788176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.788201 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.891889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.891964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.891983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.892015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.892036 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.995087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.995154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.995174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.995207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:32 crc kubenswrapper[4718]: I0123 16:17:32.995229 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:32Z","lastTransitionTime":"2026-01-23T16:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.099020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.099108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.099125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.099154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.099173 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.118540 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:23:26.944740596 +0000 UTC Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.141144 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.141216 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:33 crc kubenswrapper[4718]: E0123 16:17:33.141441 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.141478 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.141543 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:33 crc kubenswrapper[4718]: E0123 16:17:33.141757 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:33 crc kubenswrapper[4718]: E0123 16:17:33.142099 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:33 crc kubenswrapper[4718]: E0123 16:17:33.142201 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.203256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.203331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.203354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.203387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.203410 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.307392 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.307485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.307510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.307540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.307570 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.411959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.412025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.412046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.412073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.412094 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.439575 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.457456 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.468767 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.485784 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.513936 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16
:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.520077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.520195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.520222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.520265 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.520300 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.542715 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
3T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.570432 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.595361 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.618184 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.623745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.623835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.623860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.623898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.623925 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.642413 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.664481 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.686568 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.709709 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.727141 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.728229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.728292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.728318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.728355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.728382 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.761527 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.779297 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc 
kubenswrapper[4718]: I0123 16:17:33.797091 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.817270 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 
16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.832007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.832082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.832102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.832133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.832153 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.837656 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.935616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:33 crc 
kubenswrapper[4718]: I0123 16:17:33.935726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.935747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.935774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:33 crc kubenswrapper[4718]: I0123 16:17:33.935797 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:33Z","lastTransitionTime":"2026-01-23T16:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.039465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.039546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.039565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.039596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.039618 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.119210 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:00:24.268792007 +0000 UTC Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.144158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.144221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.144239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.144265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.144287 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.248049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.248112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.248131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.248159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.248417 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.352840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.352935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.352958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.352994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.353018 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.457314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.457388 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.457407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.457437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.457458 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.561099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.561149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.561159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.561178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.561194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.663606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.663696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.663716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.663749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.663772 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.766495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.766555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.766572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.766598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.766615 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.868946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.869018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.869043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.869080 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.869104 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.972217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.972264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.972273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.972291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:34 crc kubenswrapper[4718]: I0123 16:17:34.972302 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:34Z","lastTransitionTime":"2026-01-23T16:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.075277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.075352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.075376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.075407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.075429 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.120254 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:25:11.065775438 +0000 UTC Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.139905 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:35 crc kubenswrapper[4718]: E0123 16:17:35.140147 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.141274 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.141362 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.141373 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:35 crc kubenswrapper[4718]: E0123 16:17:35.141541 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:35 crc kubenswrapper[4718]: E0123 16:17:35.141657 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:35 crc kubenswrapper[4718]: E0123 16:17:35.141721 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.179294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.179369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.179390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.179416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.179437 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.282005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.282058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.282071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.282093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.282105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.385296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.385372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.385391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.385422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.385443 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.489218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.489289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.489310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.489340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.489360 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.592503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.592577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.592594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.592624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.592671 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.695379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.695464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.695487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.695521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.695545 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.798048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.798093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.798102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.798119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.798132 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.901897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.901967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.901987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.902015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:35 crc kubenswrapper[4718]: I0123 16:17:35.902036 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:35Z","lastTransitionTime":"2026-01-23T16:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.005192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.005264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.005280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.005300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.005312 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.108288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.108372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.108397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.108428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.108452 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.120880 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:57:04.430014788 +0000 UTC Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.212000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.212046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.212056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.212070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.212080 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.315720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.315822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.315847 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.315883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.315905 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.420032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.420106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.420127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.420154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.420174 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.525132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.525223 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.525249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.525286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.525310 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.628195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.628282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.628303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.628339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.628359 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.732098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.732169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.732187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.732213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.732234 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.836427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.836496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.836513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.836542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.836560 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.940232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.940291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.940312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.940340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:36 crc kubenswrapper[4718]: I0123 16:17:36.940359 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:36Z","lastTransitionTime":"2026-01-23T16:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.043138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.043226 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.043251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.043286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.043309 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.121172 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:17:58.459075995 +0000 UTC Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.139957 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.140161 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:37 crc kubenswrapper[4718]: E0123 16:17:37.140587 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.140746 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.140814 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:37 crc kubenswrapper[4718]: E0123 16:17:37.140949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:37 crc kubenswrapper[4718]: E0123 16:17:37.141368 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:37 crc kubenswrapper[4718]: E0123 16:17:37.142028 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.146927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.147008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.147027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.147072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.147091 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.249918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.250011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.250032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.250067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.250089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.354686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.354786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.354806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.354833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.354854 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.458587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.459248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.459276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.459312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.459338 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.562667 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.562724 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.562746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.562774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.562795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.666592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.666728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.666753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.666776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.666795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.771125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.771817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.771867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.771902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.771924 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.875722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.875788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.875805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.875833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.875851 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.979357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.979425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.979436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.979459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:37 crc kubenswrapper[4718]: I0123 16:17:37.979473 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:37Z","lastTransitionTime":"2026-01-23T16:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.083505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.083571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.083581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.083606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.083618 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.122267 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:55:20.658667086 +0000 UTC Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.186441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.186518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.186538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.186566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.186587 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.290966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.291049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.291069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.291101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.291120 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.394357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.394422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.394440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.394468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.394489 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.498089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.498550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.498734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.498941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.499103 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.602052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.602120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.602139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.602163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.602181 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.705240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.705305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.705323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.705346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.705361 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.812995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.813061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.813081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.813112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.813140 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.916755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.916823 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.916840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.916866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:38 crc kubenswrapper[4718]: I0123 16:17:38.916884 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:38Z","lastTransitionTime":"2026-01-23T16:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.020391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.020477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.020498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.020564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.020586 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.123200 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:45:09.412645286 +0000 UTC Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.124683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.124763 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.124781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.124810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.124832 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.139703 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.139711 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.139808 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.139920 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:39 crc kubenswrapper[4718]: E0123 16:17:39.140203 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:39 crc kubenswrapper[4718]: E0123 16:17:39.141339 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:39 crc kubenswrapper[4718]: E0123 16:17:39.145094 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:39 crc kubenswrapper[4718]: E0123 16:17:39.145131 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.145868 4718 scope.go:117] "RemoveContainer" containerID="b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6" Jan 23 16:17:39 crc kubenswrapper[4718]: E0123 16:17:39.146687 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.166545 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.191572 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.213861 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.228467 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.228549 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.228570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.228600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.228620 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.236352 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.265019 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.281131 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.314075 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://609e67b871c024a73b3ed434051ba04458e3ca2522b8624c7d1a3497154e0f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:11Z\\\",\\\"message\\\":\\\"tions: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:10Z is after 
2025-08-24T17:21:41Z]\\\\nI0123 16:17:10.403657 6142 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.332480 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc 
kubenswrapper[4718]: I0123 16:17:39.333083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.333182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.333202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.333233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.333254 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.352591 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba5
2d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.375290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.393273 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.412778 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52
c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.438583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.438699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.438724 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.438793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.438813 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.444960 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3
647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.467039 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.486394 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.508038 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.535091 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.542088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.542150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.542179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.542214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.542241 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.557589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.581799 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.601904 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.621014 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.639333 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.650266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.650343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.650369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.650396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.650417 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.658753 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.673219 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.707983 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.727472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.748322 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f6
5bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.754613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.754711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.754730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.754758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.754778 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.771773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.793574 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.814202 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c153566196671
2b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.840541 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.858442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.858550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.858569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.858676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.858701 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.860341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.894344 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3
647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.918755 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.940906 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.961922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.961967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.961986 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.962015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.962565 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:39Z","lastTransitionTime":"2026-01-23T16:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:39 crc kubenswrapper[4718]: I0123 16:17:39.964134 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:39Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.066050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.066112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.066125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.066145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.066161 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.123716 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:55:53.965688525 +0000 UTC Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.169424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.169516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.169544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.169580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.169607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.272465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.272544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.272562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.272590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.272613 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.375430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.375498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.375512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.375538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.375552 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.478111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.478193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.478211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.478239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.478257 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.581990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.582069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.582089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.582164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.582193 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.686300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.686410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.686430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.686462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.686482 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.789883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.790315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.790422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.790715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.790838 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.894439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.894523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.894542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.894574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.894594 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.997608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.997954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.998051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.998152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:40 crc kubenswrapper[4718]: I0123 16:17:40.998297 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:40Z","lastTransitionTime":"2026-01-23T16:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.101008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.101341 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.101441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.101548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.101662 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.124625 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:43:43.838526912 +0000 UTC Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.139369 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.139530 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.139467 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.139438 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:41 crc kubenswrapper[4718]: E0123 16:17:41.139898 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:41 crc kubenswrapper[4718]: E0123 16:17:41.139975 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:41 crc kubenswrapper[4718]: E0123 16:17:41.140098 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:41 crc kubenswrapper[4718]: E0123 16:17:41.140185 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.203964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.204279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.204439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.204571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.204733 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.312740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.313314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.313537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.313872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.314093 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.418781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.418868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.418886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.418917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.418937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.523114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.523219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.523240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.523267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.523286 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.627144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.627216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.627235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.627260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.627276 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.730886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.730966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.730985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.731013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.731035 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.834734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.834817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.834845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.834881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.834909 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.937393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.937451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.937469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.937493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:41 crc kubenswrapper[4718]: I0123 16:17:41.937513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:41Z","lastTransitionTime":"2026-01-23T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.040273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.040331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.040345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.040368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.040385 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.125479 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:15:58.170578278 +0000 UTC Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.142722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.142760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.142771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.142788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.142804 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.246568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.246687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.246713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.246747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.246772 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.248890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.249038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.249062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.249091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.249150 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.270021 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.276739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.276815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.276838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.276866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.276886 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.292824 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.297456 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.297742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.297882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.298037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.298176 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.316720 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.321395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.321607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.321857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.322017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.322169 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.340994 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.345359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.345603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.345844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.346000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.346138 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.363038 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:42 crc kubenswrapper[4718]: E0123 16:17:42.363163 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.365129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.365160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.365190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.365208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.365221 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.467563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.467672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.467698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.467730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.467755 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.570574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.570705 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.570726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.570755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.570773 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.674017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.674092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.674104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.674126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.674137 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.776621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.776970 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.777129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.777290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.777438 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.880702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.880806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.880842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.880877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.880901 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.984515 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.984613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.984690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.984731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:42 crc kubenswrapper[4718]: I0123 16:17:42.984759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:42Z","lastTransitionTime":"2026-01-23T16:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.088109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.088591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.088702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.088808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.088978 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.126669 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:32:17.76762749 +0000 UTC Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.140140 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.140208 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.140252 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:17:43 crc kubenswrapper[4718]: E0123 16:17:43.140263 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.140224 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:17:43 crc kubenswrapper[4718]: E0123 16:17:43.140443 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:17:43 crc kubenswrapper[4718]: E0123 16:17:43.140534 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:17:43 crc kubenswrapper[4718]: E0123 16:17:43.140599 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.192790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.192885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.192912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.192993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.193035 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.296103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.296172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.296183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.296200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.296212 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.399061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.399150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.399173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.399206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.399230 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.502438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.502494 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.502508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.502529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.502545 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.605442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.605517 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.605527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.605546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.605562 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.708954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.709007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.709016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.709037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.709049 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.811320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.811379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.811391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.811411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.811423 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.914709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.914778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.914798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.914831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:43 crc kubenswrapper[4718]: I0123 16:17:43.914849 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:43Z","lastTransitionTime":"2026-01-23T16:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.016784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.016832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.016845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.016861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.016873 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.120264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.120342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.120367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.120396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.120414 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.127783 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:51:55.14203851 +0000 UTC
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.223871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.223927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.223939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.223962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.223976 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.327656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.327718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.327735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.327758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.327771 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.430364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.430441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.430461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.430486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.430505 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.533735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.533811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.533831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.533862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.533881 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.635985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.636083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.636126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.636163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.636196 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.739365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.739420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.739432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.739454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.739472 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.843057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.843108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.843119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.843136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.843150 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.945314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.945362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.945372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.945390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:44 crc kubenswrapper[4718]: I0123 16:17:44.945400 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:44Z","lastTransitionTime":"2026-01-23T16:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.048849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.049254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.049363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.049473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.049577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.128399 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:47:30.377689483 +0000 UTC
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.139987 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.140039 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.140019 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.139995 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.140191 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770"
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.140325 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.140383 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.140484 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.151463 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.151541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.151560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.151588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.151611 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.237556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.237854 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:17:45 crc kubenswrapper[4718]: E0123 16:17:45.237941 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:18:17.237916105 +0000 UTC m=+98.385158096 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.255119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.255198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.255213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.255238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.255255 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.358242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.358572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.358710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.358816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.358906 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.461574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.461618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.461647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.461666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.461680 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.565091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.565144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.565159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.565182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.565203 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.632459 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/0.log"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.632526 4718 generic.go:334] "Generic (PLEG): container finished" podID="d2a07769-1921-4484-b1cd-28b23487bb39" containerID="7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c" exitCode=1
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.632563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerDied","Data":"7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c"}
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.633037 4718 scope.go:117] "RemoveContainer" containerID="7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c"
Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.645434 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.667402 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.668357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.668405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.668415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.668435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.668448 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.679247 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc 
kubenswrapper[4718]: I0123 16:17:45.690324 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.703359 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.714799 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.725581 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52
c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.744334 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.760824 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.773541 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.782041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.782106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.782124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.782149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.782168 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.785605 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.800007 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.813007 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.828876 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.843592 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.858218 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.871577 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.885446 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:45Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.987489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.987534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.987543 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.987564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:45 crc kubenswrapper[4718]: I0123 16:17:45.987577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:45Z","lastTransitionTime":"2026-01-23T16:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.090569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.090660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.090680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.090709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.090728 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.128893 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:12:43.205443197 +0000 UTC Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.194239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.194299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.194316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.194342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.194357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.296839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.296907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.296921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.296943 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.296959 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.399793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.399869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.399891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.399922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.399985 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.502054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.502103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.502115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.502138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.502152 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.604837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.604893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.604906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.604926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.604939 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.637154 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/0.log" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.637218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerStarted","Data":"20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.650468 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.662423 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.673312 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a71
9b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.684767 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.696033 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.709183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.709220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.709229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.709247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.709258 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.711831 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.724288 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.737109 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.745594 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.761937 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.773683 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.784811 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.799696 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.812279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.812349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.812370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.812400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.812423 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.819970 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.833475 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.844988 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.863157 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T
16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.904063 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.915401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.915441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.915453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.915475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:46 crc kubenswrapper[4718]: I0123 16:17:46.915488 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:46Z","lastTransitionTime":"2026-01-23T16:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.018143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.018415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.018546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.018662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.018750 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.120742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.120817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.120865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.120899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.120922 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.129942 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:47:40.261292377 +0000 UTC Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.139226 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.139242 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:47 crc kubenswrapper[4718]: E0123 16:17:47.139400 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.139440 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:47 crc kubenswrapper[4718]: E0123 16:17:47.139664 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.139713 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:47 crc kubenswrapper[4718]: E0123 16:17:47.139872 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:47 crc kubenswrapper[4718]: E0123 16:17:47.140015 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.223187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.223250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.223266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.223289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.223306 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.326050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.326090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.326099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.326115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.326125 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.428585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.428669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.428686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.428708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.428725 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.531438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.531807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.531938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.532045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.532134 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.635528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.635614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.635671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.635703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.635726 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.738374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.738441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.738461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.738495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.738519 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.842421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.842472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.842485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.842507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.842522 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.946274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.946338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.946355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.946377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:47 crc kubenswrapper[4718]: I0123 16:17:47.946391 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:47Z","lastTransitionTime":"2026-01-23T16:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.049385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.049442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.049454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.049476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.049489 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.130991 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:37:05.642968083 +0000 UTC Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.153230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.153307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.153326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.153353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.153374 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.257417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.257525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.257561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.257594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.257611 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.361462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.361512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.361524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.361544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.361555 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.464609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.464689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.464701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.464725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.464738 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.568204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.568258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.568269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.568287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.568299 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.670174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.670238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.670255 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.670321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.670339 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.772609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.772677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.772688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.772707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.772719 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.876574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.876618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.876643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.876659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.876670 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.978851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.978904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.978916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.978937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:48 crc kubenswrapper[4718]: I0123 16:17:48.978951 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:48Z","lastTransitionTime":"2026-01-23T16:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.081370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.081427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.081436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.081455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.081470 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.132130 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:27:57.878261489 +0000 UTC Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.140808 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.140880 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.140928 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.140949 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:49 crc kubenswrapper[4718]: E0123 16:17:49.141086 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:49 crc kubenswrapper[4718]: E0123 16:17:49.141210 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:49 crc kubenswrapper[4718]: E0123 16:17:49.141340 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:49 crc kubenswrapper[4718]: E0123 16:17:49.141408 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.155687 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.168854 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.182443 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.184739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.184799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.184813 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.184836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.184849 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.195972 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.210691 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.223466 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b
65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.256920 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.272346 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.286879 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f6
5bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.287372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.287399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.287410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.287426 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.287438 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.302435 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.315694 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.332457 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c153566196671
2b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.352380 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.372878 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.387658 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.389191 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.389214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.389222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.389254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.389267 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.404135 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.426008 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.438059 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:49Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.492052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.492110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.492125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.492147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.492164 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.595384 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.595481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.595527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.595562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.595588 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.699434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.699564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.699581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.700423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.700553 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.804302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.804355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.804372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.804398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.804414 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.907844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.907894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.907905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.907926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:49 crc kubenswrapper[4718]: I0123 16:17:49.907938 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:49Z","lastTransitionTime":"2026-01-23T16:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.010813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.010890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.010916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.010956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.011014 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.115057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.115126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.115135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.115158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.115169 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.132722 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:06:25.282713167 +0000 UTC Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.217428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.217481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.217499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.217517 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.217529 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.320034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.320077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.320085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.320104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.320115 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.422988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.423135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.423147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.423176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.423191 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.526582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.526696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.526717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.526748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.526773 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.629430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.629474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.629483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.629502 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.629513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.732268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.732319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.732333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.732351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.732361 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.835786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.835992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.836012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.836039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.836060 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.939563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.939624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.939663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.939685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:50 crc kubenswrapper[4718]: I0123 16:17:50.939699 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:50Z","lastTransitionTime":"2026-01-23T16:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.042889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.042953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.042966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.042991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.043008 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.133823 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:07:44.232903693 +0000 UTC Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.140237 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.140363 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:51 crc kubenswrapper[4718]: E0123 16:17:51.140433 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.140503 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:51 crc kubenswrapper[4718]: E0123 16:17:51.140661 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.140747 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:51 crc kubenswrapper[4718]: E0123 16:17:51.140953 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:51 crc kubenswrapper[4718]: E0123 16:17:51.141071 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.147403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.147460 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.147480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.147511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.147532 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.250949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.251014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.251024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.251042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.251054 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.354763 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.354841 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.354856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.354881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.354895 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.458312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.458363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.458378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.458398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.458411 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.561112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.561165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.561174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.561194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.561207 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.664031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.664129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.664147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.664172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.664194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.766987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.767066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.767082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.767107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.767123 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.870309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.870383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.870405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.870435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.870454 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.974235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.974332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.974355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.974385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:51 crc kubenswrapper[4718]: I0123 16:17:51.974403 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:51Z","lastTransitionTime":"2026-01-23T16:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.080427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.080502 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.080524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.080551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.080570 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.134949 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:22:55.847312112 +0000 UTC Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.183368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.183422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.183441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.183468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.183485 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.286423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.286508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.286525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.286552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.286573 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.390014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.390125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.390144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.390174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.390192 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.487747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.487826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.487846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.487876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.487978 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.509710 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:52Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.515120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.515179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.515192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.515218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.515236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.535258 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:52Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.540347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.540453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.540480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.540558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.540588 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.563700 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:52Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.569396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.569476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.569503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.569537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.569566 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.589251 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:52Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.595213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.595271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.595283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.595304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.595317 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.612429 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:52Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:52 crc kubenswrapper[4718]: E0123 16:17:52.612721 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.616112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.616205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.616225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.616284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.616303 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.719330 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.719435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.719459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.719493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.719518 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.823171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.823249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.823268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.823306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.823335 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.926005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.926067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.926079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.926103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:52 crc kubenswrapper[4718]: I0123 16:17:52.926118 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:52Z","lastTransitionTime":"2026-01-23T16:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.028999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.029047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.029056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.029073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.029086 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.132028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.132085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.132106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.132137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.132159 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.135470 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:05:25.622933861 +0000 UTC Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.145125 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:53 crc kubenswrapper[4718]: E0123 16:17:53.145332 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.145782 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.145924 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:53 crc kubenswrapper[4718]: E0123 16:17:53.146120 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.146222 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:53 crc kubenswrapper[4718]: E0123 16:17:53.146806 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:53 crc kubenswrapper[4718]: E0123 16:17:53.147059 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.147297 4718 scope.go:117] "RemoveContainer" containerID="b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.236278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.236335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.236354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.236382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.236402 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.340064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.340128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.340139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.340161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.340175 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.444072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.444129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.444209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.444237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.444259 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.547960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.548031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.548049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.548079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.548101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.651729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.652093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.652275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.652380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.652528 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.755372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.755453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.755474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.755501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.755524 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.858167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.858214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.858228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.858249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.858264 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.961127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.961190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.961211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.961235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:53 crc kubenswrapper[4718]: I0123 16:17:53.961254 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:53Z","lastTransitionTime":"2026-01-23T16:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.064306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.064360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.064373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.064395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.064409 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.135925 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:13:33.059367629 +0000 UTC Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.167012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.167079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.167097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.167127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.167147 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.270353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.270422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.270434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.270457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.270472 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.375440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.375500 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.375512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.375535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.375548 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.478242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.478300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.478312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.478332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.478346 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.581404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.581457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.581469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.581493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.581509 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.668290 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/2.log" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.672080 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.672900 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.684240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.684287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.684300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.684319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.684337 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.694255 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095
860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.712137 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.725070 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.741911 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.755704 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.769854 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.787945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.788003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.788023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.788070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.788089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.791368 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.806271 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc 
kubenswrapper[4718]: I0123 16:17:54.825339 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.843263 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.854706 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.870848 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52
c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.897557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.897660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.897682 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.897713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.897734 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:54Z","lastTransitionTime":"2026-01-23T16:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.905902 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3
647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.923406 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.941583 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.959980 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:54 crc kubenswrapper[4718]: I0123 16:17:54.983532 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.000968 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.001437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.001488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.001501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.001522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.001536 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.104891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.104955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.104966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.104989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.105006 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.136268 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:15:24.829245809 +0000 UTC Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.140004 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.140042 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.140019 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.140173 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:55 crc kubenswrapper[4718]: E0123 16:17:55.140399 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:55 crc kubenswrapper[4718]: E0123 16:17:55.140719 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:55 crc kubenswrapper[4718]: E0123 16:17:55.140807 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:55 crc kubenswrapper[4718]: E0123 16:17:55.140603 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.207378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.207419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.207427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.207443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.207454 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.310729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.310800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.310818 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.310847 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.310868 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.414914 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.415006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.415028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.415060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.415081 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.519052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.519147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.519166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.519197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.519219 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.623481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.623584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.623603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.623682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.623713 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.679072 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/3.log" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.680128 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/2.log" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.683466 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" exitCode=1 Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.683556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.683660 4718 scope.go:117] "RemoveContainer" containerID="b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.685330 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:17:55 crc kubenswrapper[4718]: E0123 16:17:55.686026 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.711969 4718 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.726616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.726709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.726731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.726762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.726784 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.737580 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.758894 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.779985 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.809305 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df2
9994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.829767 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.830256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.830386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.830406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.830435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.830454 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.850472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.870000 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.893396 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.910590 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.932506 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.935311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.935487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.935551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.935588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.935612 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:55Z","lastTransitionTime":"2026-01-23T16:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.953480 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.974748 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:55 crc kubenswrapper[4718]: I0123 16:17:55.994803 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:55Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.018907 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.033393 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.038727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.038788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.038807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.038832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.038851 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.060342 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7138a7e2f60232b29542b1e76cc9ac79391a9777a66c016404649b5bf0324e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:26Z\\\",\\\"message\\\":\\\"reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:26.296431 6357 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 16:17:26.296521 6357 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296566 6357 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296656 6357 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:17:26.296793 6357 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:17:26.297167 6357 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:54Z\\\",\\\"message\\\":\\\"ler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:17:54.474120 6746 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:17:54.474120 6746 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474127 6746 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:17:54.474178 6746 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:17:54.474164 6746 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474243 6746 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474203 6746 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474378 6746 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.475017 6746 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:17:54.475050 6746 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:17:54.475081 6746 factory.go:656] Stopping watch factory\\\\nI0123 16:17:54.475095 6746 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:17:54.475124 6746 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:17:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.079120 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc 
kubenswrapper[4718]: I0123 16:17:56.137401 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:01:22.667189474 +0000 UTC Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.142066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.142119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.142138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.142163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.142202 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.245066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.245145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.245171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.245204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.245227 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.349572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.349692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.349713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.349741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.349760 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.453910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.453955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.453966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.453991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.454005 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.557329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.557392 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.557407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.557443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.557462 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.661904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.661982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.662001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.662029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.662053 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.693232 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/3.log" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.699920 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:17:56 crc kubenswrapper[4718]: E0123 16:17:56.700717 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.719145 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.737471 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.753617 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.765511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc 
kubenswrapper[4718]: I0123 16:17:56.765641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.765653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.765673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.765685 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.769845 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c153566196671
2b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.783785 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.810431 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16
:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.832249 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.852718 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.868916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.868996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.869016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.869050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.869071 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.875218 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.900957 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.924843 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.943347 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.962853 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.972665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.973196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.973377 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.973535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.973709 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:56Z","lastTransitionTime":"2026-01-23T16:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:56 crc kubenswrapper[4718]: I0123 16:17:56.981554 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.004533 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:57Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.020612 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b
65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:57Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.055202 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:54Z\\\",\\\"message\\\":\\\"ler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:17:54.474120 6746 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:17:54.474120 6746 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474127 6746 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0123 16:17:54.474178 6746 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:17:54.474164 6746 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474243 6746 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474203 6746 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474378 6746 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.475017 6746 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:17:54.475050 6746 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:17:54.475081 6746 factory.go:656] Stopping watch factory\\\\nI0123 16:17:54.475095 6746 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:17:54.475124 6746 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:17:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:57Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.073136 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:57Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.076533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.076869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.077133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.077300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.077441 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.138211 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:31:55.210993996 +0000 UTC Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.140267 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.140189 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.140465 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.140551 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:57 crc kubenswrapper[4718]: E0123 16:17:57.140593 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:57 crc kubenswrapper[4718]: E0123 16:17:57.140884 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:57 crc kubenswrapper[4718]: E0123 16:17:57.141117 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:57 crc kubenswrapper[4718]: E0123 16:17:57.141786 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.180554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.180627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.180686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.180714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.180735 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.285149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.285249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.285276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.285313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.285340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.389041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.389099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.389123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.389157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.389179 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.493919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.493991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.494009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.494060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.494083 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.597844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.598287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.598428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.598583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.598741 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.702160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.702230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.702249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.702276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.702295 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.805896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.805945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.805960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.805983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.806000 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.909525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.909614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.909663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.909698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:57 crc kubenswrapper[4718]: I0123 16:17:57.909717 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:57Z","lastTransitionTime":"2026-01-23T16:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.013701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.013782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.013801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.013837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.013861 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.116852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.116926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.116949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.116982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.117005 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.139382 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:10:44.352110699 +0000 UTC Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.219613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.220050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.220145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.220251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.220349 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.323522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.323580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.323600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.323651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.323672 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.427534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.427604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.427614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.427661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.427675 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.530963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.531025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.531034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.531055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.531066 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.634759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.634814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.634825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.634845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.634857 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.737669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.737721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.737733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.737754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.737797 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.840625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.840701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.840714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.840737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.840755 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.944092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.944160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.944182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.944213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:58 crc kubenswrapper[4718]: I0123 16:17:58.944236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:58Z","lastTransitionTime":"2026-01-23T16:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.048003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.048047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.048058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.048077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.048088 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.139796 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:39:30.941104269 +0000 UTC Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.139908 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.139908 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.140098 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:17:59 crc kubenswrapper[4718]: E0123 16:17:59.140333 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.140475 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:17:59 crc kubenswrapper[4718]: E0123 16:17:59.140782 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:17:59 crc kubenswrapper[4718]: E0123 16:17:59.140892 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:17:59 crc kubenswrapper[4718]: E0123 16:17:59.141070 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.151053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.151102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.151111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.151133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.151149 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.153450 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.166512 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.179393 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.192250 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.211346 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.230912 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:54Z\\\",\\\"message\\\":\\\"ler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:17:54.474120 6746 
handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:17:54.474120 6746 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474127 6746 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:17:54.474178 6746 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:17:54.474164 6746 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474243 6746 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474203 6746 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474378 6746 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.475017 6746 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:17:54.475050 6746 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:17:54.475081 6746 factory.go:656] Stopping watch factory\\\\nI0123 16:17:54.475095 6746 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:17:54.475124 6746 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:17:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.243837 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.254013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.254057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.254069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.254089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.254101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.255239 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.266922 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.279801 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc
851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.289839 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c153566196671
2b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.307998 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.327952 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.345336 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d49
1ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.356714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.356747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.356758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.356777 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.356790 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.366370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.382990 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.408142 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.421748 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:17:59Z is after 2025-08-24T17:21:41Z" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.459384 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.459476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.459506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.459567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.459596 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.562573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.562619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.562654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.562680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.562701 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.666022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.666108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.666128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.666159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.666181 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.769352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.769440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.769458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.769489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.769510 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.872595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.872668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.872682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.872706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.872719 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.976049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.976111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.976123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.976148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:17:59 crc kubenswrapper[4718]: I0123 16:17:59.976165 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:17:59Z","lastTransitionTime":"2026-01-23T16:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.079574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.079625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.079653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.079678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.079693 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.140715 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:38:46.982489092 +0000 UTC Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.184020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.184111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.184130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.184166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.184189 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.288180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.288269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.288296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.288331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.288396 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.392985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.393053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.393066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.393089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.393103 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.496230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.496290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.496306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.496328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.496343 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.599108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.599164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.599178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.599202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.599218 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.702713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.702800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.702821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.702851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.702872 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.809070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.809152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.809174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.809208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.809230 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.832567 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.832875 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 16:19:04.832824426 +0000 UTC m=+145.980066487 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.833170 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.833300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.833452 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.833484 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.833557 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.833526562 +0000 UTC m=+145.980768763 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.833595 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.833576623 +0000 UTC m=+145.980818884 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.912366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.912455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.912482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.912518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.912544 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:00Z","lastTransitionTime":"2026-01-23T16:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.934133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:00 crc kubenswrapper[4718]: I0123 16:18:00.934186 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934299 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934335 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934351 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934299 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:18:00 crc 
kubenswrapper[4718]: E0123 16:18:00.934412 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.934391821 +0000 UTC m=+146.081633832 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934416 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934427 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:18:00 crc kubenswrapper[4718]: E0123 16:18:00.934453 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.934445052 +0000 UTC m=+146.081687043 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.015838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.015885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.015899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.015918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.015931 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.118858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.118924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.118942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.118971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.118993 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.139812 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.139867 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.139896 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.139812 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:01 crc kubenswrapper[4718]: E0123 16:18:01.139959 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:01 crc kubenswrapper[4718]: E0123 16:18:01.140078 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:01 crc kubenswrapper[4718]: E0123 16:18:01.140143 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:01 crc kubenswrapper[4718]: E0123 16:18:01.140192 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.140905 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:10:52.868075677 +0000 UTC Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.222304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.222360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.222375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.222396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.222414 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.325292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.325364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.325382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.325419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.325442 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.428292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.428368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.428387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.428415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.428433 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.531077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.531132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.531144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.531163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.531177 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.634809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.634878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.634895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.634924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.634947 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.737998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.738059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.738079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.738105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.738125 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.840925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.841021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.841040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.841078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.841104 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.945021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.945065 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.945078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.945099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:01 crc kubenswrapper[4718]: I0123 16:18:01.945113 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:01Z","lastTransitionTime":"2026-01-23T16:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.048221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.048842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.048862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.048891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.048909 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.142124 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:12:13.468778475 +0000 UTC Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.152206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.152259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.152271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.152295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.152308 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.255578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.255657 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.255669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.255708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.255720 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.358491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.358559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.358577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.358605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.358624 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.462997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.463076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.463101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.463133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.463164 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.566949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.567012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.567030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.567056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.567073 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.670417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.670503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.670523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.670550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.670576 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.774264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.774325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.774343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.774369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.774388 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.878350 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.878425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.878452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.878481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.878501 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.946495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.946571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.946589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.946613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.946669 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:02 crc kubenswrapper[4718]: E0123 16:18:02.973619 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.979386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.979449 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.979467 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.979495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:02 crc kubenswrapper[4718]: I0123 16:18:02.979515 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:02Z","lastTransitionTime":"2026-01-23T16:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.000280 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:02Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.007197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.007258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.007276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.007381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.007405 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.027882 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.033589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.033744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.033770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.033844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.033869 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.056368 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.062135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.062199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.062217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.062247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.062267 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.083014 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:03Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.083148 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.085626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.085674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.085684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.085706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.085723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.140406 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.140538 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.140461 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.140725 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.140916 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.141090 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.141367 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:03 crc kubenswrapper[4718]: E0123 16:18:03.141578 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.142338 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:07:28.127150863 +0000 UTC Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.189149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.189216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.189234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.189260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc 
kubenswrapper[4718]: I0123 16:18:03.189280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.293077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.293156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.293172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.293200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.293219 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.396911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.396974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.396987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.397009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.397022 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.500366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.500417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.500434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.500457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.500474 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.603605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.603689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.603702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.603727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.603743 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.708138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.708221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.708240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.708271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.708292 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.812327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.812393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.812415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.812444 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.812469 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.915972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.916058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.916082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.916112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:03 crc kubenswrapper[4718]: I0123 16:18:03.916132 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:03Z","lastTransitionTime":"2026-01-23T16:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.020319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.020392 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.020415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.020445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.020466 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.123940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.124017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.124034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.124063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.124083 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.143437 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:42:42.447761092 +0000 UTC Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.227613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.227699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.227715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.227741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.227758 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.331529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.331604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.331659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.331688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.331708 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.435449 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.435521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.435539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.435569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.435589 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.539067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.539129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.539146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.539176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.539194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.642539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.642598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.642673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.642701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.642721 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.745373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.745446 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.745469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.745499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.745523 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.848849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.848890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.848901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.848917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.848928 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.955116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.955252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.955282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.955323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:04 crc kubenswrapper[4718]: I0123 16:18:04.955364 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:04Z","lastTransitionTime":"2026-01-23T16:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.060255 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.060357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.060375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.060435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.060454 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.140596 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.140700 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.140848 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:05 crc kubenswrapper[4718]: E0123 16:18:05.141532 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:05 crc kubenswrapper[4718]: E0123 16:18:05.141711 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:05 crc kubenswrapper[4718]: E0123 16:18:05.141883 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.141070 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:05 crc kubenswrapper[4718]: E0123 16:18:05.142078 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.144410 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:25:58.467075599 +0000 UTC Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.163267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.163351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.163371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.163400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.163440 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.266822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.266876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.266890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.266909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.266922 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.370486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.370567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.370592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.370663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.370691 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.474509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.475099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.475127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.475164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.475194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.578609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.578719 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.578744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.578834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.578894 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.681761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.681812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.681823 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.681842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.681856 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.784853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.784937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.784958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.784987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.785007 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.888821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.888921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.888939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.888971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.888990 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.992457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.992559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.992612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.992672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:05 crc kubenswrapper[4718]: I0123 16:18:05.992694 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:05Z","lastTransitionTime":"2026-01-23T16:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.095975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.096043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.096057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.096080 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.096095 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.145004 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:35:06.016169306 +0000 UTC Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.199243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.199290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.199306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.199333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.199354 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.302088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.302130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.302142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.302164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.302179 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.405251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.405294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.405307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.405326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.405341 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.509082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.509153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.509178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.509210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.509235 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.613152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.613226 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.613247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.613277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.613297 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.716485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.716563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.716591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.716625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.716686 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.819932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.820030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.820055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.820089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.820115 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.923781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.923908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.923933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.923970 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:06 crc kubenswrapper[4718]: I0123 16:18:06.923997 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:06Z","lastTransitionTime":"2026-01-23T16:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.027275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.027355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.027374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.027404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.027426 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.131842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.131949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.131971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.131999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.132019 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.139863 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:07 crc kubenswrapper[4718]: E0123 16:18:07.140056 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.140685 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.140772 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:07 crc kubenswrapper[4718]: E0123 16:18:07.140954 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.141313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:07 crc kubenswrapper[4718]: E0123 16:18:07.141749 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:07 crc kubenswrapper[4718]: E0123 16:18:07.141803 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.142221 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:18:07 crc kubenswrapper[4718]: E0123 16:18:07.142572 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.145659 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:33:20.095054165 +0000 UTC Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.235591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.235712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.235734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc 
kubenswrapper[4718]: I0123 16:18:07.235764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.235784 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.340127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.340222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.340247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.340278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.340302 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.442997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.443064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.443075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.443095 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.443107 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.546268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.546318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.546329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.546348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.546363 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.649845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.649898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.649909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.649933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.649947 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.753424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.753500 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.753524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.753555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.753577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.856656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.856706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.856719 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.856740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.856753 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.960032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.960114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.960138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.960170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:07 crc kubenswrapper[4718]: I0123 16:18:07.960194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:07Z","lastTransitionTime":"2026-01-23T16:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.063562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.063620 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.063663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.063688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.063707 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.146617 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:47:00.746725508 +0000 UTC Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.163104 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.167334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.167414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.167447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.167473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.167495 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.270246 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.270306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.270321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.270342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.270357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.374234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.374285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.374295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.374315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.374331 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.477785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.477829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.477839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.477855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.477866 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.581497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.581560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.581573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.581597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.581612 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.684902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.684982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.685005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.685036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.685058 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.790568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.790692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.790712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.790740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.790758 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.893675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.893734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.893745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.893765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.893779 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.997586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.997653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.997663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.997682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:08 crc kubenswrapper[4718]: I0123 16:18:08.997692 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:08Z","lastTransitionTime":"2026-01-23T16:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.101891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.101983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.102004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.102032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.102056 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.139859 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.139952 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.140018 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:09 crc kubenswrapper[4718]: E0123 16:18:09.140201 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.140320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:09 crc kubenswrapper[4718]: E0123 16:18:09.140435 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:09 crc kubenswrapper[4718]: E0123 16:18:09.140585 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:09 crc kubenswrapper[4718]: E0123 16:18:09.141331 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.146848 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:10:30.307919803 +0000 UTC Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.163000 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.189916 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:54Z\\\",\\\"message\\\":\\\"ler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:17:54.474120 6746 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:17:54.474120 6746 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474127 6746 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0123 16:17:54.474178 6746 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:17:54.474164 6746 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474243 6746 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474203 6746 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474378 6746 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.475017 6746 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:17:54.475050 6746 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:17:54.475081 6746 factory.go:656] Stopping watch factory\\\\nI0123 16:17:54.475095 6746 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:17:54.475124 6746 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:17:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.204800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.204846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.204862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.204884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.204900 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.209965 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc 
kubenswrapper[4718]: I0123 16:18:09.228742 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba52d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.248018 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.261972 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.275017 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52
c1535661966712b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.293903 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.308012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.308074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.308089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.308117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.308134 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.324677 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.338698 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.355391 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229d5b-cd4f-4a3b-a7a8-f2737884f68f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0e74fd97b17b8aff6be92a7a3dbf07fd751efb5132967e24568e84ceddbc828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.387513 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.409115 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.411431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.411472 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.411483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.411504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.411516 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.428045 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.446421 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.465589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.484768 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.503509 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.515942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.516056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.516106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.516141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.516169 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.524472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.619844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.619941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.619968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.620009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.620037 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.723264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.723351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.723370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.723406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.723426 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.826717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.826774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.826788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.826807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.826820 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.930051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.930160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.930174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.930201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:09 crc kubenswrapper[4718]: I0123 16:18:09.930216 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:09Z","lastTransitionTime":"2026-01-23T16:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.033129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.033220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.033242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.033270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.033289 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.136753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.136814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.136830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.136855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.136875 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.147216 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:54:49.359949182 +0000 UTC Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.240110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.240187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.240205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.240235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.240254 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.343998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.344079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.344096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.344124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.344143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.448180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.448249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.448263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.448287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.448306 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.552982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.553173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.553198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.553227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.553255 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.656829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.656915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.656939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.656972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.656999 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.761366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.761450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.761470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.761498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.761524 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.864735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.864791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.864800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.864853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.864865 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.968251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.968315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.968335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.968360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:10 crc kubenswrapper[4718]: I0123 16:18:10.968377 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:10Z","lastTransitionTime":"2026-01-23T16:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.077014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.077066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.077082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.077105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.077122 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.139836 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.139904 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.139836 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:11 crc kubenswrapper[4718]: E0123 16:18:11.140050 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.140088 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:11 crc kubenswrapper[4718]: E0123 16:18:11.140193 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:11 crc kubenswrapper[4718]: E0123 16:18:11.140447 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:11 crc kubenswrapper[4718]: E0123 16:18:11.141012 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.147988 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:52:46.52841508 +0000 UTC Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.180546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.180590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.180600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.180617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.180644 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.283393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.283454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.283473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.283504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.283530 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.387241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.387340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.387370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.387406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.387435 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.491463 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.491559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.491584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.491615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.491678 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.595436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.595520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.595542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.595578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.595601 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.699337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.699406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.699427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.699454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.699474 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.803215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.803283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.803300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.803331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.803356 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.906979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.907060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.907081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.907113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:11 crc kubenswrapper[4718]: I0123 16:18:11.907133 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:11Z","lastTransitionTime":"2026-01-23T16:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.010901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.011001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.011020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.011050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.011074 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.114575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.114726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.114751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.114785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.114809 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.148737 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:53:48.364246993 +0000 UTC Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.218107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.218193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.218222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.218256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.218297 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.321193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.321270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.321289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.321318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.321338 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.425191 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.425282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.425306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.425334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.425355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.529532 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.529598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.529614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.529677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.529696 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.633910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.633985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.634004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.634037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.634055 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.738444 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.738504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.738523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.738549 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.738569 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.842481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.842541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.842561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.842590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.842607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.946412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.946488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.946506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.946532 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:12 crc kubenswrapper[4718]: I0123 16:18:12.946551 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:12Z","lastTransitionTime":"2026-01-23T16:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.050179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.050247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.050266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.050292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.050312 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.139287 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.139472 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.139799 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.139899 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.140316 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.140431 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.140505 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.140610 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.148900 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:50:16.002841268 +0000 UTC Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.153898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.153951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.153968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.153992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.154014 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.257733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.257793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.257810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.257833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.257852 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.361960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.362035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.362049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.362075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.362092 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.434276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.434328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.434343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.434364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.434381 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.455741 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.460492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.460527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.460540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.460564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.460579 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.482107 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.490842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.490916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.490928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.490950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.490966 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.517030 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.523417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.523648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.523747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.523849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.523941 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.545338 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.550364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.550411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.550422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.550440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.550453 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.569413 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:13 crc kubenswrapper[4718]: E0123 16:18:13.569716 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.573319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.573371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.573391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.573420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.573442 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.676208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.676280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.676294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.676319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.676340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.778955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.779019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.779029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.779051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.779063 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.881811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.881891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.881916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.881954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.881980 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.985441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.985971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.986137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.986289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:13 crc kubenswrapper[4718]: I0123 16:18:13.986422 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:13Z","lastTransitionTime":"2026-01-23T16:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.090607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.091163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.091389 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.091586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.091804 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.149967 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:58:18.008707176 +0000 UTC Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.195666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.195739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.195759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.195792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.195814 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.300121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.300184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.300201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.300227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.300247 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.404173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.404243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.404266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.404302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.404327 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.507979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.508071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.508090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.508121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.508143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.612040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.612100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.612120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.612152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.612184 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.715513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.715579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.715597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.715623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.715668 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.819015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.819082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.819096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.819119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.819133 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.922474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.922559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.922578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.922611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:14 crc kubenswrapper[4718]: I0123 16:18:14.922669 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:14Z","lastTransitionTime":"2026-01-23T16:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.025405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.025445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.025454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.025467 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.025476 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.129584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.129717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.129742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.129774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.129792 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.140127 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.140172 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.140235 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.140127 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:15 crc kubenswrapper[4718]: E0123 16:18:15.140366 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:15 crc kubenswrapper[4718]: E0123 16:18:15.140457 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:15 crc kubenswrapper[4718]: E0123 16:18:15.140580 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:15 crc kubenswrapper[4718]: E0123 16:18:15.140796 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.151100 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:39:01.151878012 +0000 UTC Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.233580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.233682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.233703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.233737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.233756 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.339774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.339867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.339890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.339923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.339947 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.444838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.444935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.444960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.444997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.445024 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.548417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.548490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.548509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.548536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.548555 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.652929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.653016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.653038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.653071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.653095 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.756364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.756478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.756497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.756524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.756541 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.860285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.860371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.860391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.860419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.860440 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.964263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.964321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.964340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.964363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:15 crc kubenswrapper[4718]: I0123 16:18:15.964381 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:15Z","lastTransitionTime":"2026-01-23T16:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.082266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.082358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.082375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.082402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.082420 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.151845 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:51:38.062163329 +0000 UTC Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.186175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.186236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.186250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.186274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.186292 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.289164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.289240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.289266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.289298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.289321 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.392374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.392455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.392482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.392511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.392529 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.496048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.496144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.496174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.496213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.496242 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.600015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.600103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.600130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.600173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.600199 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.704051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.704124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.704148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.704183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.704208 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.807471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.807524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.807535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.807559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.807571 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.911371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.911434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.911452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.911477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:16 crc kubenswrapper[4718]: I0123 16:18:16.911497 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:16Z","lastTransitionTime":"2026-01-23T16:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.014951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.015034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.015053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.015093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.015114 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.117855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.117956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.117982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.118040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.118062 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.139474 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.139656 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.139780 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.139767 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.139834 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.140039 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.140161 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.140238 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.152933 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:28:44.050146938 +0000 UTC Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.221598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.221697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.221717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.221747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.221770 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.275319 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.275672 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:18:17 crc kubenswrapper[4718]: E0123 16:18:17.276146 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs podName:593a4237-c13e-4403-b139-f32b552ca770 nodeName:}" failed. No retries permitted until 2026-01-23 16:19:21.276112728 +0000 UTC m=+162.423354749 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs") pod "network-metrics-daemon-dppxp" (UID: "593a4237-c13e-4403-b139-f32b552ca770") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.325519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.325588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.325607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.325669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.325689 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.429134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.429599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.429839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.429990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.430123 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.533421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.533502 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.533525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.533555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.533575 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.637745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.637815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.637832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.637858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.637878 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.741257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.741306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.741318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.741343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.741355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.844252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.844322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.844342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.844369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.844388 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.948074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.948142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.948163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.948190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:17 crc kubenswrapper[4718]: I0123 16:18:17.948210 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:17Z","lastTransitionTime":"2026-01-23T16:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.052580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.052714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.052754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.052792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.052818 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.153872 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:23:02.355597441 +0000 UTC Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.156746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.156804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.156821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.156851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.156869 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.259409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.259478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.259491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.259516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.259531 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.363199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.363311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.363332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.363360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.363383 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.466178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.466243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.466256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.466284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.466298 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.570046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.570114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.570134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.570155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.570169 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.674343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.674405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.674425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.674453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.674473 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.778269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.778343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.778365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.778391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.778415 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.881866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.881963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.881998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.882033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.882089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.986508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.988322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.988349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.988386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:18 crc kubenswrapper[4718]: I0123 16:18:18.988406 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:18Z","lastTransitionTime":"2026-01-23T16:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.092442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.092515 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.092533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.092564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.092583 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.139837 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.139935 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.139941 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:19 crc kubenswrapper[4718]: E0123 16:18:19.140051 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.140098 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:19 crc kubenswrapper[4718]: E0123 16:18:19.140242 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:19 crc kubenswrapper[4718]: E0123 16:18:19.140414 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:19 crc kubenswrapper[4718]: E0123 16:18:19.140563 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.154833 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:35:12.218728615 +0000 UTC Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.168284 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"495f14b9-105b-4d67-ba76-4335df89f346\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c71d83838a78a26db9960a767ad45d0d265b95dc57e5bd0cf563b826cc6d463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac13433dd215299d73528710c13faecb8ddca83c8e6a83288a2a0520cd1dd18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1be437697d8109fb05d403da834684e3dd12cdcb3ba9ba752ba42c07ede07f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1913bae7acb265663a5d3d606d3090bac4e06e7f641d4c1df207faab3738154f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9223
692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9223692dd17cffb784b5b379682cdf44e3454754585daefd01dc6857be88322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2155fc96a4681154672d10b793e93426084ee05f955e871a65e4e0b7c552d5a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79231e2a35e26c64614f75bae4d747df3d3969b82fd27fcceeb71aab5dc05f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:17:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw496\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x7cc9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.186882 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk97t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6ffd18-7477-43f8-878a-2cc5849bc796\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a267cdc8fabf2552632bd564d28b3e9f9b72c49210af39dc4a4d79792d21e3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T16:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk97t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.195908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.195980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.195992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.196014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.196026 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.203510 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229d5b-cd4f-4a3b-a7a8-f2737884f68f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0e74fd97b17b8aff6be92a7a3dbf07fd751efb5132967e24568e84ceddbc828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},
{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99e4cc6402dc089f764c94a93949ef45f0eae85d5db332fb8bb8a49d2cb27f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.250683 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf05b93-b69e-451c-8098-ad2a122d9ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7d38a50a998620c2b66b9024a87eb8a1e87c581a92bccbe8de7416e1b5c449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e1b8154d5ef186f3ad206ba7f3835086d6f995147d6a81448449230cbd6d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b26e8e341c3ee53c68b0b5a12b3eeaea9bc601636773249b8c5ae47d0cf8fec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f056ce87ef50b0035365194b2b6798ea1d28c9917a7b9534587af9027c266b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d88fbb1f67de2e3c7f06248431a7838222c0c51fab9b40c72ae707668985f0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a82ba53058156b51c629941deaff14d8b3e48ecf6df29994e849e7811d22e8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aff23d142b364bb5ddf7e6cb57cf1f1f30e063b3e1482679d00cbf720da2993d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fa3647056a1b5971ef0cfdaa0b1ad6f331c849927f6d7e4e5a9dcae9d610e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.275802 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15a4e5cd-08da-45a5-a501-88d7b86682f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:16:51.714130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:16:51.715676 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-731012038/tls.crt::/tmp/serving-cert-731012038/tls.key\\\\\\\"\\\\nI0123 16:16:57.092668 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:16:57.095719 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:16:57.095740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:16:57.095830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:16:57.095842 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:16:57.103961 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 16:16:57.103995 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.103999 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:16:57.104004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:16:57.104008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:16:57.104011 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:16:57.104014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 16:16:57.104247 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 16:16:57.115919 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.294083 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7628fdaa58c97995392328ef2f7db3748288bcf42a51dcf38cc4b7688530cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.298901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.298956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.298978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.299007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.299024 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.314513 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.332585 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c6093a-e4a7-4596-9c00-1e0ad6dfae04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a6ed274fdb2ebdf6074638b91d70587b780ede52f5ce1a1312b2c34a63cb6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0143eb451da5fd0bb1fb6425f03bf93906fdc14c6bc0406b92967bc01b60b089\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32c7463e09b05d34d80469e3239c2581297f5b91ce4f80b097b1d39b562a4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.352575 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.403318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.403366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.403380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc 
kubenswrapper[4718]: I0123 16:18:19.403400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.403412 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.404851 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.425547 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7190c43b1ab5490d88570d0aca01e9a3317a5ad881978cf32f246fb5933a338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.445501 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tb79v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a07769-1921-4484-b1cd-28b23487bb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:45Z\\\",\\\"message\\\":\\\"2026-01-23T16:17:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168\\\\n2026-01-23T16:17:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0523c138-83ca-423b-b4d4-f924dc31a168 to /host/opt/cni/bin/\\\\n2026-01-23T16:17:00Z [verbose] multus-daemon started\\\\n2026-01-23T16:17:00Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:17:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlh4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tb79v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.460855 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e270db3-22b5-4b86-ad17-d6804c7f2d00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecacee08f384d174fd44b65b8dd0f427a1f9647a91170694e11f3dea1db339e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbsl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.483543 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4985ab62-43a5-4fd8-919c-f9db2eea18f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:17:54Z\\\",\\\"message\\\":\\\"ler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:17:54.474120 6746 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:17:54.474120 6746 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474127 6746 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0123 16:17:54.474178 6746 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:17:54.474164 6746 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474243 6746 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474203 6746 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.474378 6746 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:17:54.475017 6746 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:17:54.475050 6746 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:17:54.475081 6746 factory.go:656] Stopping watch factory\\\\nI0123 16:17:54.475095 6746 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:17:54.475124 6746 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:17:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:17:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081f985c00b0f08f7c
1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsnx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qnds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.498472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dppxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"593a4237-c13e-4403-b139-f32b552ca770\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7xhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dppxp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.506212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.506267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.506280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.506301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.506312 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.517190 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a504457b-f3f5-432d-81f8-0c1dd6499ec3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b207b699cf31af5be9d845bc3ec1c19a61f4b87788e3dee4314b0a36104c710e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81893e5afcb0a719b0edb43b630ba5
2d95210ac4b3c57c5c8b3f2a9676a06d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27ff168d50068ac4e86f6b706be71d8eee9bc268b6d2855b366b5cd3eb4740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456c5f65bc7c17699b15056adb88518732c8348ff2f4fd25591bfc4571ca2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:16:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:16:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.545798 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b9ce0f107f402d727770f98e81d1a81627575b41fd48b1e30ac79f6aa8e478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.569353 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48ad62dc-feb1-4fb1-989b-7830ef9061c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9678c71880ff4883f69d4899310e8c6ee0505ca7efb726312287e8280f7ac806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qhnq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:16:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sf9rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.608759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc 
kubenswrapper[4718]: I0123 16:18:19.608827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.608846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.608874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.608896 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.627008 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4202bddb-b096-4dfc-b808-a6874059803c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e98fc20c44caa721320753aa5df3e37bee1df17672395ec01de952f123bab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a37ff313ac396adf2b4d52c153566196671
2b043005f9e39fcd6b7e6ce46b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv6dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qn56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.712098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.712145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.712155 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.712171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.712183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.815182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.815229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.815239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.815257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.815269 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.917680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.917762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.917822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.917852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:19 crc kubenswrapper[4718]: I0123 16:18:19.917871 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:19Z","lastTransitionTime":"2026-01-23T16:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.021032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.021078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.021090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.021110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.021123 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.126462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.126528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.126542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.127207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.127238 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.155370 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:13:50.224370454 +0000 UTC Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.230870 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.230933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.230948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.230973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.230994 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.334878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.334973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.335006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.335040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.335065 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.438945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.439016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.439034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.439066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.439088 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.543327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.543409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.543429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.543459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.543482 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.647071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.647136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.647148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.647168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.647179 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.751557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.751715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.751743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.751774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.751797 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.854845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.854931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.854956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.854996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.855022 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.958943 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.958992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.959005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.959025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:20 crc kubenswrapper[4718]: I0123 16:18:20.959040 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:20Z","lastTransitionTime":"2026-01-23T16:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.067658 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.067736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.067754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.067782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.067800 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.139749 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.140036 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.140128 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.140493 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:21 crc kubenswrapper[4718]: E0123 16:18:21.140617 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:21 crc kubenswrapper[4718]: E0123 16:18:21.140810 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:21 crc kubenswrapper[4718]: E0123 16:18:21.141022 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:21 crc kubenswrapper[4718]: E0123 16:18:21.141131 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.141144 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:18:21 crc kubenswrapper[4718]: E0123 16:18:21.141472 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5qnds_openshift-ovn-kubernetes(4985ab62-43a5-4fd8-919c-f9db2eea18f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.156503 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:02:44.290953725 +0000 UTC Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.170351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.170397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.170409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.170423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.170437 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.274010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.274075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.274089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.274110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.274126 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.377937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.378019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.378038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.378070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.378093 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.481367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.481439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.481457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.481485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.481510 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.584801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.584882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.584904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.584934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.584955 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.688385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.688474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.688492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.688516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.688536 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.793044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.793131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.793153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.793185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.793205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.902389 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.902449 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.902465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.902488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:21 crc kubenswrapper[4718]: I0123 16:18:21.902504 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:21Z","lastTransitionTime":"2026-01-23T16:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.006196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.006278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.006295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.006322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.006339 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.108930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.108994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.109011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.109034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.109052 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.157350 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:11:51.868422229 +0000 UTC Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.212115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.212190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.212227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.212265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.212290 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.314943 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.315020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.315043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.315072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.315093 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.419150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.419277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.419339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.419370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.419390 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.523196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.523244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.523259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.523275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.523285 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.626325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.626363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.626374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.626387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.626395 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.729460 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.729533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.729556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.729585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.729604 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.832170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.832213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.832224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.832241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.832253 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.934833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.934867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.934876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.934890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:22 crc kubenswrapper[4718]: I0123 16:18:22.934898 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:22Z","lastTransitionTime":"2026-01-23T16:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.037483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.037570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.037592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.037622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.037684 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.139405 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.139510 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.139577 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.139614 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.139849 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.140003 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.140189 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.140386 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.141347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.141400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.141420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.141442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.141460 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.157705 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:49:09.984562697 +0000 UTC Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.244089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.244154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.244176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.244208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.244229 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.347380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.347893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.347926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.347952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.347994 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.450879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.450948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.450971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.450998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.451019 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.554103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.554146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.554157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.554175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.554188 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.657387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.657452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.657470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.657498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.657520 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.760360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.760424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.760445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.760472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.760492 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.850442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.850490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.850501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.850518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.850531 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.867882 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.873274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.873332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.873352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.873377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.873393 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.954951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.955038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.955060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.955089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.955110 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.974298 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b4c39b8c-400d-464e-b232-a4d4bf4271ad\\\",\\\"systemUUID\\\":\\\"765cfa9d-30f3-4d97-8bd5-593f268463db\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:18:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:18:23 crc kubenswrapper[4718]: E0123 16:18:23.974893 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.981700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.981754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.981767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.981788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:23 crc kubenswrapper[4718]: I0123 16:18:23.981800 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:23Z","lastTransitionTime":"2026-01-23T16:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.084932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.085004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.085029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.085072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.085089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.158801 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:56:39.095614538 +0000 UTC Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.188258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.188316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.188333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.188358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.188375 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.291594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.291740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.291754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.291774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.291786 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.396725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.396776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.396788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.396809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.396827 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.500158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.500206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.500218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.500239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.500251 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.603091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.603131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.603146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.603165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.603178 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.706397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.706457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.706468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.706486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.706498 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.809085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.809148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.809212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.809239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.809258 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.912579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.912670 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.912684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.912705 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:24 crc kubenswrapper[4718]: I0123 16:18:24.912718 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:24Z","lastTransitionTime":"2026-01-23T16:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.014990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.015044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.015056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.015076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.015088 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.118246 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.118319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.118340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.118366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.118387 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.140046 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.140076 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.140076 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:25 crc kubenswrapper[4718]: E0123 16:18:25.140243 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:25 crc kubenswrapper[4718]: E0123 16:18:25.140363 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.140398 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:25 crc kubenswrapper[4718]: E0123 16:18:25.140471 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:25 crc kubenswrapper[4718]: E0123 16:18:25.140564 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.159549 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:19:39.569804531 +0000 UTC Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.221322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.221436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.221457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.221483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.221501 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.324125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.324182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.324194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.324213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.324227 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.427808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.427869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.427886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.427910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.427932 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.530083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.530159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.530177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.530202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.530223 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.633118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.633162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.633175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.633193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.633205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.735241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.735289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.735298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.735313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.735323 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.838151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.838221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.838240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.838267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.838286 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.942304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.942366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.942390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.942425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:25 crc kubenswrapper[4718]: I0123 16:18:25.942447 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:25Z","lastTransitionTime":"2026-01-23T16:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.045434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.045487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.045497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.045514 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.045524 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.148382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.148450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.148468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.148490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.148509 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.159705 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:43:03.47411332 +0000 UTC Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.251861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.251915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.251933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.251955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.251969 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.356072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.356133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.356154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.356183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.356203 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.458878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.458923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.458935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.458953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.458964 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.562700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.562776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.562799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.562829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.562850 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.667279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.667345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.667363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.667391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.667408 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.770090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.770156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.770179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.770237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.770260 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.873482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.873545 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.873562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.873587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.873607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.976931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.976995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.977014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.977039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:26 crc kubenswrapper[4718]: I0123 16:18:26.977057 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:26Z","lastTransitionTime":"2026-01-23T16:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.080591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.080685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.080713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.080738 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.080755 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.142877 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.142930 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.142898 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.143001 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:27 crc kubenswrapper[4718]: E0123 16:18:27.143133 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:27 crc kubenswrapper[4718]: E0123 16:18:27.143269 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:27 crc kubenswrapper[4718]: E0123 16:18:27.143390 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:27 crc kubenswrapper[4718]: E0123 16:18:27.143509 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.159791 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:20:39.546108353 +0000 UTC Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.184924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.184978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.184996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.185022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.185040 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.289382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.289448 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.289466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.289495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.289517 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.393250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.393303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.393316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.393343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.393357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.496423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.496492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.496509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.496535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.496557 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.599781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.599874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.599908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.599937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.599960 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.702748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.702837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.702866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.702895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.702921 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.806095 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.806163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.806180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.806207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.806227 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.909502 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.909567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.909585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.909611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:27 crc kubenswrapper[4718]: I0123 16:18:27.909656 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:27Z","lastTransitionTime":"2026-01-23T16:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.013030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.013182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.013218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.013250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.013271 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.116070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.116183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.116210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.116250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.116278 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.160264 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:17:33.872180895 +0000 UTC Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.219503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.219588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.219609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.219668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.219695 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.322290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.322323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.322333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.322349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.322359 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.424919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.424964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.424972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.424988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.424997 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.528615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.528759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.528785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.528819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.528844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.633280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.633345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.633365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.633394 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.633414 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.736902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.736963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.736986 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.737018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.737039 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.840320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.840406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.840427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.840455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.840524 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.944806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.944902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.944921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.944950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:28 crc kubenswrapper[4718]: I0123 16:18:28.944973 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:28Z","lastTransitionTime":"2026-01-23T16:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.048456 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.048525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.048542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.048562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.048583 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.139964 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.140043 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.140150 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:29 crc kubenswrapper[4718]: E0123 16:18:29.140943 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.141004 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:29 crc kubenswrapper[4718]: E0123 16:18:29.141346 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:29 crc kubenswrapper[4718]: E0123 16:18:29.141485 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:29 crc kubenswrapper[4718]: E0123 16:18:29.141710 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.151011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.151066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.151083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.151108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.151124 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.161226 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:31:19.106485733 +0000 UTC Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.170888 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bjfr4" podStartSLOduration=91.170865877 podStartE2EDuration="1m31.170865877s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.169973822 +0000 UTC m=+110.317215863" watchObservedRunningTime="2026-01-23 16:18:29.170865877 +0000 UTC m=+110.318107878" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.254882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.254977 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.254991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.255833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.255926 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.265314 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.265285534 podStartE2EDuration="56.265285534s" podCreationTimestamp="2026-01-23 16:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.265045607 +0000 UTC m=+110.412287608" watchObservedRunningTime="2026-01-23 16:18:29.265285534 +0000 UTC m=+110.412527535" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.313070 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qn56b" podStartSLOduration=90.313051272 podStartE2EDuration="1m30.313051272s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.312513358 +0000 UTC m=+110.459755359" watchObservedRunningTime="2026-01-23 16:18:29.313051272 +0000 UTC m=+110.460293273" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.313205 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podStartSLOduration=91.313200277 podStartE2EDuration="1m31.313200277s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.297084599 +0000 UTC m=+110.444326610" watchObservedRunningTime="2026-01-23 16:18:29.313200277 +0000 UTC m=+110.460442278" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.331921 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.331893885 podStartE2EDuration="21.331893885s" podCreationTimestamp="2026-01-23 16:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.331407012 +0000 UTC m=+110.478649013" watchObservedRunningTime="2026-01-23 16:18:29.331893885 +0000 UTC m=+110.479135886" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.358907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.358948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.358959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.358975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.358984 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.363751 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=91.363730341 podStartE2EDuration="1m31.363730341s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.361795139 +0000 UTC m=+110.509037130" watchObservedRunningTime="2026-01-23 16:18:29.363730341 +0000 UTC m=+110.510972332" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.377189 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.377165546 podStartE2EDuration="1m32.377165546s" podCreationTimestamp="2026-01-23 16:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.376847917 +0000 UTC m=+110.524089908" watchObservedRunningTime="2026-01-23 16:18:29.377165546 +0000 UTC m=+110.524407537" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.420149 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x7cc9" podStartSLOduration=91.420122245 podStartE2EDuration="1m31.420122245s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.419823877 +0000 UTC m=+110.567065878" watchObservedRunningTime="2026-01-23 16:18:29.420122245 +0000 UTC m=+110.567364236" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.432028 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jk97t" podStartSLOduration=91.432009828 
podStartE2EDuration="1m31.432009828s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.431130044 +0000 UTC m=+110.578372035" watchObservedRunningTime="2026-01-23 16:18:29.432009828 +0000 UTC m=+110.579251819" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.448811 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.448791054 podStartE2EDuration="1m26.448791054s" podCreationTimestamp="2026-01-23 16:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.448487686 +0000 UTC m=+110.595729677" watchObservedRunningTime="2026-01-23 16:18:29.448791054 +0000 UTC m=+110.596033045" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.461965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.461994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.462002 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.462015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.462023 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.500125 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tb79v" podStartSLOduration=91.500102329 podStartE2EDuration="1m31.500102329s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:29.498851676 +0000 UTC m=+110.646093707" watchObservedRunningTime="2026-01-23 16:18:29.500102329 +0000 UTC m=+110.647344330" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.564830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.564890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.564902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.564920 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.564932 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.667231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.667272 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.667281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.667299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.667309 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.769703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.769764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.769773 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.769789 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.769799 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.872512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.872570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.872587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.872612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.872664 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.975863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.975953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.975974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.976000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:29 crc kubenswrapper[4718]: I0123 16:18:29.976020 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:29Z","lastTransitionTime":"2026-01-23T16:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.078566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.078625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.078672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.078698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.078717 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.162183 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:31:19.303129723 +0000 UTC Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.182094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.182151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.182170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.182197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.182217 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.285769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.286262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.286415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.286574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.286759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.389838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.389922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.389947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.389982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.390006 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.493554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.494089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.494234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.494394 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.494535 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.597523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.597901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.598053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.598244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.598403 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.702335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.702410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.702433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.702461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.702481 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.806398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.806466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.806488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.806521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.806541 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.910219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.910284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.910308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.910336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:30 crc kubenswrapper[4718]: I0123 16:18:30.910358 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:30Z","lastTransitionTime":"2026-01-23T16:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.013396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.013445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.013458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.013478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.013491 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.116307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.116355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.116367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.116388 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.116402 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.139250 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.139312 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:31 crc kubenswrapper[4718]: E0123 16:18:31.139401 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:31 crc kubenswrapper[4718]: E0123 16:18:31.139525 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.139257 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.139582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:31 crc kubenswrapper[4718]: E0123 16:18:31.139619 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:31 crc kubenswrapper[4718]: E0123 16:18:31.139718 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.162691 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:51:43.230558422 +0000 UTC Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.219235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.219274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.219284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.219304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.219313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.322766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.322825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.322844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.322869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.322889 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.430878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.430924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.430935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.430950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.430963 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.534775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.534857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.534876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.534906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.534925 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.638535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.638686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.638716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.638799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.638826 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.742593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.742767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.742791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.742956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.742977 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.846525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.846604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.846622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.847269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.847370 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.848191 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/1.log" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.848841 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/0.log" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.848937 4718 generic.go:334] "Generic (PLEG): container finished" podID="d2a07769-1921-4484-b1cd-28b23487bb39" containerID="20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282" exitCode=1 Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.848976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerDied","Data":"20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282"} Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.849020 4718 scope.go:117] "RemoveContainer" containerID="7547b27b8a66a48cd8dc85150f7d15c9753322a683abf443d6df97186fa0f89c" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.849586 4718 scope.go:117] "RemoveContainer" containerID="20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282" Jan 23 16:18:31 crc kubenswrapper[4718]: E0123 16:18:31.849889 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tb79v_openshift-multus(d2a07769-1921-4484-b1cd-28b23487bb39)\"" pod="openshift-multus/multus-tb79v" podUID="d2a07769-1921-4484-b1cd-28b23487bb39" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.950099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:31 crc 
kubenswrapper[4718]: I0123 16:18:31.950165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.950182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.950206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:31 crc kubenswrapper[4718]: I0123 16:18:31.950225 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:31Z","lastTransitionTime":"2026-01-23T16:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.053548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.053645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.053659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.053679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.053693 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.157019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.157726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.157939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.158114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.158261 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.163175 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:02:26.328819935 +0000 UTC Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.261039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.261359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.261450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.261519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.261598 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.364897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.364992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.365019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.365051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.365075 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.467929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.467973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.467986 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.468003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.468015 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.570143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.570176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.570185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.570197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.570207 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.673596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.673711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.673735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.673762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.673782 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.776056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.776382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.776477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.776601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.776744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.855401 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/1.log" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.879850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.879897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.879908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.879929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.879945 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.982501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.982894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.982980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.983078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:32 crc kubenswrapper[4718]: I0123 16:18:32.983233 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:32Z","lastTransitionTime":"2026-01-23T16:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.085747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.085794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.085806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.085824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.085835 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.139842 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:33 crc kubenswrapper[4718]: E0123 16:18:33.140295 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.140844 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:33 crc kubenswrapper[4718]: E0123 16:18:33.141157 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.141557 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:33 crc kubenswrapper[4718]: E0123 16:18:33.141896 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.142508 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:33 crc kubenswrapper[4718]: E0123 16:18:33.142787 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.163485 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:58:35.374166235 +0000 UTC Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.188199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.188270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.188287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.188313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.188333 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.291587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.291711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.291732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.291758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.291776 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.394523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.394575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.394591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.394614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.394668 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.498024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.498117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.498148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.498183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.498207 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.601797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.601896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.601919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.601948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.601970 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.704688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.704749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.704768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.704791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.704809 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.808816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.808862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.808874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.808889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.808902 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.911685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.911735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.911748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.911765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:33 crc kubenswrapper[4718]: I0123 16:18:33.911778 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:33Z","lastTransitionTime":"2026-01-23T16:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.014281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.014347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.014366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.014390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.014407 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:34Z","lastTransitionTime":"2026-01-23T16:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.117107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.117248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.117265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.117287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.117304 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:34Z","lastTransitionTime":"2026-01-23T16:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.164135 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:03:29.896488463 +0000 UTC Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.220135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.220213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.220239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.220269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.220295 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:34Z","lastTransitionTime":"2026-01-23T16:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.260361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.260424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.260441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.260468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.260487 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:18:34Z","lastTransitionTime":"2026-01-23T16:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.333234 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97"] Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.333792 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.336018 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.336476 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.336790 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.339270 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.377618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.377744 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.377801 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.377829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.377852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479108 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" 
Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479165 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479231 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479346 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479434 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.479558 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.480919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.487738 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.503442 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a17fa5e6-92c3-4374-83e5-ec91b9c05ed2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hfv97\" (UID: \"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.658058 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.865470 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" event={"ID":"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2","Type":"ContainerStarted","Data":"ba564795a7fd6b4849252a7558801f69cf0b16c0585d205c0e6aea6dfe2d8e1c"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.865575 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" event={"ID":"a17fa5e6-92c3-4374-83e5-ec91b9c05ed2","Type":"ContainerStarted","Data":"ec65729d20f1a6dbaa5dd4a43204683e7d6cf1837c59df0b028ff79d9a6d704c"} Jan 23 16:18:34 crc kubenswrapper[4718]: I0123 16:18:34.894415 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hfv97" podStartSLOduration=96.894389008 podStartE2EDuration="1m36.894389008s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:34.891847799 +0000 UTC m=+116.039089980" watchObservedRunningTime="2026-01-23 16:18:34.894389008 +0000 UTC m=+116.041631029" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.140447 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.140532 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.140964 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.141062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:35 crc kubenswrapper[4718]: E0123 16:18:35.141193 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:35 crc kubenswrapper[4718]: E0123 16:18:35.141285 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:35 crc kubenswrapper[4718]: E0123 16:18:35.141495 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:35 crc kubenswrapper[4718]: E0123 16:18:35.141545 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.141747 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.165593 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:58:24.103015122 +0000 UTC Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.166332 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.177956 4718 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.869662 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/3.log" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.872985 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerStarted","Data":"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462"} Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.873547 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:18:35 crc kubenswrapper[4718]: I0123 16:18:35.904185 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podStartSLOduration=97.904166446 podStartE2EDuration="1m37.904166446s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:35.903980311 +0000 UTC m=+117.051222382" watchObservedRunningTime="2026-01-23 16:18:35.904166446 +0000 UTC m=+117.051408437" Jan 23 16:18:36 crc kubenswrapper[4718]: I0123 16:18:36.109805 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dppxp"] Jan 23 16:18:36 crc kubenswrapper[4718]: I0123 16:18:36.109942 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:36 crc kubenswrapper[4718]: E0123 16:18:36.110074 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:37 crc kubenswrapper[4718]: I0123 16:18:37.139569 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:37 crc kubenswrapper[4718]: I0123 16:18:37.139738 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:37 crc kubenswrapper[4718]: I0123 16:18:37.139811 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:37 crc kubenswrapper[4718]: E0123 16:18:37.139904 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:37 crc kubenswrapper[4718]: E0123 16:18:37.140138 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:37 crc kubenswrapper[4718]: E0123 16:18:37.140220 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:38 crc kubenswrapper[4718]: I0123 16:18:38.140119 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:38 crc kubenswrapper[4718]: E0123 16:18:38.140472 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:39 crc kubenswrapper[4718]: E0123 16:18:39.081519 4718 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 16:18:39 crc kubenswrapper[4718]: I0123 16:18:39.139730 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:39 crc kubenswrapper[4718]: I0123 16:18:39.139774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:39 crc kubenswrapper[4718]: I0123 16:18:39.139774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:39 crc kubenswrapper[4718]: E0123 16:18:39.141705 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:39 crc kubenswrapper[4718]: E0123 16:18:39.142139 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:39 crc kubenswrapper[4718]: E0123 16:18:39.142352 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:39 crc kubenswrapper[4718]: E0123 16:18:39.250504 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:18:40 crc kubenswrapper[4718]: I0123 16:18:40.139292 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:40 crc kubenswrapper[4718]: E0123 16:18:40.139500 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:41 crc kubenswrapper[4718]: I0123 16:18:41.140411 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:41 crc kubenswrapper[4718]: I0123 16:18:41.140452 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:41 crc kubenswrapper[4718]: I0123 16:18:41.140627 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:41 crc kubenswrapper[4718]: E0123 16:18:41.140725 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:41 crc kubenswrapper[4718]: E0123 16:18:41.141561 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:41 crc kubenswrapper[4718]: E0123 16:18:41.142194 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:42 crc kubenswrapper[4718]: I0123 16:18:42.139582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:42 crc kubenswrapper[4718]: E0123 16:18:42.139762 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.139592 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:43 crc kubenswrapper[4718]: E0123 16:18:43.139949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.140010 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.140102 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:43 crc kubenswrapper[4718]: E0123 16:18:43.140272 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:43 crc kubenswrapper[4718]: E0123 16:18:43.140769 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.140963 4718 scope.go:117] "RemoveContainer" containerID="20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.907799 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/1.log" Jan 23 16:18:43 crc kubenswrapper[4718]: I0123 16:18:43.907902 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerStarted","Data":"057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed"} Jan 23 16:18:44 crc kubenswrapper[4718]: I0123 16:18:44.139918 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:18:44 crc kubenswrapper[4718]: E0123 16:18:44.140130 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770" Jan 23 16:18:44 crc kubenswrapper[4718]: E0123 16:18:44.252691 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:18:45 crc kubenswrapper[4718]: I0123 16:18:45.139943 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:18:45 crc kubenswrapper[4718]: I0123 16:18:45.140037 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:18:45 crc kubenswrapper[4718]: I0123 16:18:45.140047 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:18:45 crc kubenswrapper[4718]: E0123 16:18:45.140170 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:18:45 crc kubenswrapper[4718]: E0123 16:18:45.140300 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:18:45 crc kubenswrapper[4718]: E0123 16:18:45.140594 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:18:46 crc kubenswrapper[4718]: I0123 16:18:46.139229 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:18:46 crc kubenswrapper[4718]: E0123 16:18:46.139752 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770"
Jan 23 16:18:47 crc kubenswrapper[4718]: I0123 16:18:47.140049 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:18:47 crc kubenswrapper[4718]: I0123 16:18:47.140139 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:18:47 crc kubenswrapper[4718]: E0123 16:18:47.140181 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:18:47 crc kubenswrapper[4718]: I0123 16:18:47.140245 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:18:47 crc kubenswrapper[4718]: E0123 16:18:47.140423 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:18:47 crc kubenswrapper[4718]: E0123 16:18:47.140550 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:18:48 crc kubenswrapper[4718]: I0123 16:18:48.139620 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:18:48 crc kubenswrapper[4718]: E0123 16:18:48.139884 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dppxp" podUID="593a4237-c13e-4403-b139-f32b552ca770"
Jan 23 16:18:49 crc kubenswrapper[4718]: I0123 16:18:49.141920 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:18:49 crc kubenswrapper[4718]: E0123 16:18:49.142122 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:18:49 crc kubenswrapper[4718]: I0123 16:18:49.142361 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:18:49 crc kubenswrapper[4718]: I0123 16:18:49.142384 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:18:49 crc kubenswrapper[4718]: E0123 16:18:49.142748 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:18:49 crc kubenswrapper[4718]: E0123 16:18:49.143408 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:18:50 crc kubenswrapper[4718]: I0123 16:18:50.139265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp"
Jan 23 16:18:50 crc kubenswrapper[4718]: I0123 16:18:50.142401 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 16:18:50 crc kubenswrapper[4718]: I0123 16:18:50.143782 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.140982 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.141944 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.142389 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.147215 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.147453 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.147528 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 16:18:51 crc kubenswrapper[4718]: I0123 16:18:51.150074 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.560150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.643144 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.644341 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.646144 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qwldq"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.646947 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qwldq"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.648381 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5vn"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.649573 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.651555 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.652387 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.653189 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.654957 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.657611 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.657740 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.658827 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.658863 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.659059 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.662315 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.663032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.663616 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f6czp"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.664243 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.666424 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.667011 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.669002 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.669222 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.669356 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.670987 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.672001 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.679581 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.684026 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.685226 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.686945 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lt6zb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.687463 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.703832 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.703906 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704222 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704380 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704601 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704652 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704663 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.704762 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705146 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705366 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705445 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705514 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705138 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705604 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705606 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705656 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705709 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.705594 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706217 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706260 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706385 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706407 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706426 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706669 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706675 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706895 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hzrpf"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707420 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706909 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707717 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.706987 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707028 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707083 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708218 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707098 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707192 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708343 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707205 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.707280 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708512 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708550 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708661 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708779 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708781 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708943 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.708971 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709068 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709102 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709111 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709153 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709239 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709287 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709082 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709574 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709574 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.709739 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.710117 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.710228 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.710932 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.711354 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqvbg"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.711365 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.711438 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.711695 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.711932 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.712351 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.712891 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.713050 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.714921 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.715225 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718113 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718310 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718405 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718532 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718535 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718714 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.718963 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.719414 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.721143 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.721453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722002 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722168 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722594 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722681 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722711 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722865 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.722957 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.723881 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.724956 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skvf4"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.725772 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.749560 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.754061 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.754120 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.754666 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.761029 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.761488 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.777168 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.777601 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.777813 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.777911 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.778216 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.778491 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.778609 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnchs\" (UniqueName: \"kubernetes.io/projected/0c5d01e6-e826-4f29-9160-ab28e19020b9-kube-api-access-mnchs\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.778866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ed9dcbf-5502-4797-9b65-ff900aa065d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.778941 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779102 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779164 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86444525-e481-4e9f-9a52-471432286641-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ededf03-fe00-4583-b2ee-ef2a3f301f79-serving-cert\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-audit-dir\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779335 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779358 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwsj\" (UniqueName: \"kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779378 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-config\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779385 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779399 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779468 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31032e25-5c02-40f3-8058-47ada861d728-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779511 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31032e25-5c02-40f3-8058-47ada861d728-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779544 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"]
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779579 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID:
\"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779649 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-audit\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779744 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmp2\" (UniqueName: \"kubernetes.io/projected/31032e25-5c02-40f3-8058-47ada861d728-kube-api-access-xgmp2\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779812 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779842 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxv65\" (UniqueName: 
\"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-kube-api-access-cxv65\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779961 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.779999 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780024 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-image-import-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 
16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780088 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780113 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780144 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e179a011-4637-42d5-9679-e910440d25ac-audit-dir\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780168 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkmt\" (UniqueName: \"kubernetes.io/projected/e179a011-4637-42d5-9679-e910440d25ac-kube-api-access-gwkmt\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-serving-cert\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5d01e6-e826-4f29-9160-ab28e19020b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780279 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0c5d01e6-e826-4f29-9160-ab28e19020b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780313 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zd2\" (UniqueName: \"kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" 
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780386 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780406 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780438 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crf7v\" (UniqueName: \"kubernetes.io/projected/5ededf03-fe00-4583-b2ee-ef2a3f301f79-kube-api-access-crf7v\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780468 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-node-pullsecrets\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780546 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780590 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-serving-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780680 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-audit-policies\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780710 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-serving-cert\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780768 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780801 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-client\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" 
Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780836 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vppcb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44t8v\" (UniqueName: \"kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-images\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4zh\" (UniqueName: \"kubernetes.io/projected/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-kube-api-access-zg4zh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780955 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.780988 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-encryption-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781023 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cl4\" (UniqueName: \"kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781054 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksk2\" (UniqueName: \"kubernetes.io/projected/8ed9dcbf-5502-4797-9b65-ff900aa065d8-kube-api-access-lksk2\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781097 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781160 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781193 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbgt\" (UniqueName: \"kubernetes.io/projected/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-kube-api-access-vcbgt\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781229 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781259 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-config\") pod 
\"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781289 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781354 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781385 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781435 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781499 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86444525-e481-4e9f-9a52-471432286641-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781527 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-etcd-client\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781549 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781558 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-encryption-config\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.781596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5vh\" (UniqueName: \"kubernetes.io/projected/a29dbc90-997d-4f83-8151-1cfcca661070-kube-api-access-jz5vh\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.783473 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.786961 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.787683 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.787915 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.787960 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.788099 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.788652 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5vn"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.788736 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.789065 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kts8"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.789496 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.798101 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.798148 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.798345 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.798550 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.800036 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.800360 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 
23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.800360 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.801677 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.802441 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.802978 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.804062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.805385 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.805838 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.810706 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.810771 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.810907 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rlqmc"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.811513 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.821737 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.822347 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.825129 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.825593 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.825682 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.825720 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.825881 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5p7bb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.826471 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.827256 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.827527 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.828108 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.841379 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.843508 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.851760 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.856585 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.858046 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.858256 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.860086 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.862244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.867278 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882704 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882753 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882849 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86444525-e481-4e9f-9a52-471432286641-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-etcd-client\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882911 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-encryption-config\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882934 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5vh\" (UniqueName: \"kubernetes.io/projected/a29dbc90-997d-4f83-8151-1cfcca661070-kube-api-access-jz5vh\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.882993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnchs\" (UniqueName: \"kubernetes.io/projected/0c5d01e6-e826-4f29-9160-ab28e19020b9-kube-api-access-mnchs\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883012 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ed9dcbf-5502-4797-9b65-ff900aa065d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86444525-e481-4e9f-9a52-471432286641-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883130 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ededf03-fe00-4583-b2ee-ef2a3f301f79-serving-cert\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883148 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-audit-dir\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 
16:18:54.883163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883179 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwsj\" (UniqueName: \"kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883194 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-config\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31032e25-5c02-40f3-8058-47ada861d728-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31032e25-5c02-40f3-8058-47ada861d728-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-audit\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883331 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmp2\" (UniqueName: \"kubernetes.io/projected/31032e25-5c02-40f3-8058-47ada861d728-kube-api-access-xgmp2\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883422 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 
crc kubenswrapper[4718]: I0123 16:18:54.883449 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv65\" (UniqueName: \"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-kube-api-access-cxv65\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883488 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883523 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: 
\"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-image-import-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-service-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883610 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883654 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e179a011-4637-42d5-9679-e910440d25ac-audit-dir\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883675 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkmt\" (UniqueName: \"kubernetes.io/projected/e179a011-4637-42d5-9679-e910440d25ac-kube-api-access-gwkmt\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883705 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-serving-cert\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5d01e6-e826-4f29-9160-ab28e19020b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883920 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0c5d01e6-e826-4f29-9160-ab28e19020b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.883973 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884042 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crf7v\" (UniqueName: \"kubernetes.io/projected/5ededf03-fe00-4583-b2ee-ef2a3f301f79-kube-api-access-crf7v\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zd2\" (UniqueName: \"kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-node-pullsecrets\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884210 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-serving-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-audit-policies\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: 
\"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884324 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-serving-cert\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884428 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884485 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-client\") pod 
\"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-client\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44t8v\" (UniqueName: \"kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-images\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4zh\" (UniqueName: \"kubernetes.io/projected/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-kube-api-access-zg4zh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884582 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884597 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-encryption-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2cl4\" (UniqueName: \"kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lksk2\" (UniqueName: \"kubernetes.io/projected/8ed9dcbf-5502-4797-9b65-ff900aa065d8-kube-api-access-lksk2\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884706 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-serving-cert\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884723 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vcz\" (UniqueName: \"kubernetes.io/projected/2f771629-4dda-413b-9dc3-75bf6c13f310-kube-api-access-p6vcz\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884742 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbgt\" (UniqueName: \"kubernetes.io/projected/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-kube-api-access-vcbgt\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884793 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884883 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884947 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-config\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.884987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc 
kubenswrapper[4718]: I0123 16:18:54.885027 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-config\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31032e25-5c02-40f3-8058-47ada861d728-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885748 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-audit-dir\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.886031 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.885286 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qwldq"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.886427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.886466 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n58ck"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.886974 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.887474 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.887488 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.887705 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.887825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.888152 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.888473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-image-import-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.889553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-service-ca-bundle\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.892187 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: 
\"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.893283 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.893359 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e179a011-4637-42d5-9679-e910440d25ac-audit-dir\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.893833 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.893885 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.893897 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.896272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86444525-e481-4e9f-9a52-471432286641-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 
16:18:54.896389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.896807 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-audit\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.897848 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.898991 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.899348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ededf03-fe00-4583-b2ee-ef2a3f301f79-config\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 
16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.899767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-images\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.899826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.900181 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0c5d01e6-e826-4f29-9160-ab28e19020b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902047 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902083 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kts8"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902094 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skvf4"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902116 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.902958 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.903281 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.904797 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5d01e6-e826-4f29-9160-ab28e19020b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.906418 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-serving-cert\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.906814 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.906950 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31032e25-5c02-40f3-8058-47ada861d728-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.907190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-encryption-config\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 
16:18:54.907221 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.907240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908014 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86444525-e481-4e9f-9a52-471432286641-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908061 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ededf03-fe00-4583-b2ee-ef2a3f301f79-serving-cert\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 
16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908702 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.908767 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-serving-ca\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a29dbc90-997d-4f83-8151-1cfcca661070-node-pullsecrets\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909610 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f6czp"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.909843 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-audit-policies\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.910124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-etcd-client\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.910354 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.910571 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc 
kubenswrapper[4718]: I0123 16:18:54.910587 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e179a011-4637-42d5-9679-e910440d25ac-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.911462 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.911747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.911771 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a29dbc90-997d-4f83-8151-1cfcca661070-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.911783 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.912348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8ed9dcbf-5502-4797-9b65-ff900aa065d8-config\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.912371 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.912871 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-encryption-config\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.913088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.913351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ed9dcbf-5502-4797-9b65-ff900aa065d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.913582 4718 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.913970 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a29dbc90-997d-4f83-8151-1cfcca661070-etcd-client\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.914008 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.914393 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.915391 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.915955 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.916712 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 
16:18:54.916746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.917365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.917459 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.918759 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.921491 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e179a011-4637-42d5-9679-e910440d25ac-serving-cert\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.922033 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5"] Jan 23 16:18:54 
crc kubenswrapper[4718]: I0123 16:18:54.923111 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.924091 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.925238 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vppcb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.926552 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.926601 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ps7vw"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.928187 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5h4nn"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.928361 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.929525 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.929925 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.929734 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.933899 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.935025 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.936071 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.937082 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rlqmc"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.938266 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqvbg"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.939358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.940449 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.941740 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n58ck"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.943107 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5p7bb"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.944695 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ps7vw"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.946672 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.947055 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.948392 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.949367 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lfmcg"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.950323 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.954503 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lfmcg"] Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.967605 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.985888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-client\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.985969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-serving-cert\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.986006 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6vcz\" (UniqueName: \"kubernetes.io/projected/2f771629-4dda-413b-9dc3-75bf6c13f310-kube-api-access-p6vcz\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.986057 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-config\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.986093 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.986207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-service-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.986839 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.987316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-service-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.988116 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-config\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.989734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-ca\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.990863 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-serving-cert\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:54 crc kubenswrapper[4718]: I0123 16:18:54.991947 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f771629-4dda-413b-9dc3-75bf6c13f310-etcd-client\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.007224 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 16:18:55 
crc kubenswrapper[4718]: I0123 16:18:55.028099 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.047401 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.067432 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.087428 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.108576 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.127078 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.148232 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.167531 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.187863 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.208136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 
16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.227106 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.268047 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.287089 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.308126 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.328960 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.348809 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.368375 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.387541 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.408382 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.427792 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 
16:18:55.447847 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.466545 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.488778 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.508079 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.528484 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.548318 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.567850 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.588035 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.608691 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.642002 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.648378 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 
16:18:55.668223 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.688196 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.707869 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.729212 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.748418 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.778896 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.788910 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.819373 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.825782 4718 request.go:700] Waited for 1.019321539s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.828507 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 16:18:55 
crc kubenswrapper[4718]: I0123 16:18:55.848301 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.888726 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.907761 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.928491 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.948136 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.967549 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 16:18:55 crc kubenswrapper[4718]: I0123 16:18:55.988238 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.007671 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.029079 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.048011 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.069740 
4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.099558 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.108062 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.127754 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.146948 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.168767 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.187916 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.207583 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.227351 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.247813 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.267479 4718 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.287184 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.307456 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.329727 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.347446 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.366843 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.387493 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.408600 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.427944 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.448657 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.468075 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 
16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.487764 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.544277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv65\" (UniqueName: \"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-kube-api-access-cxv65\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.557783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkmt\" (UniqueName: \"kubernetes.io/projected/e179a011-4637-42d5-9679-e910440d25ac-kube-api-access-gwkmt\") pod \"apiserver-7bbb656c7d-g6hrd\" (UID: \"e179a011-4637-42d5-9679-e910440d25ac\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.577500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44t8v\" (UniqueName: \"kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v\") pod \"route-controller-manager-6576b87f9c-42bjf\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.598721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwsj\" (UniqueName: \"kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj\") pod \"console-f9d7485db-lt6zb\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.623660 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zg4zh\" (UniqueName: \"kubernetes.io/projected/ae4b9e6a-5c1c-4cb5-899e-6c451c179b77-kube-api-access-zg4zh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7m6gn\" (UID: \"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.636519 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2cl4\" (UniqueName: \"kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4\") pod \"controller-manager-879f6c89f-bvjhk\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.655219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lksk2\" (UniqueName: \"kubernetes.io/projected/8ed9dcbf-5502-4797-9b65-ff900aa065d8-kube-api-access-lksk2\") pod \"machine-api-operator-5694c8668f-9p5vn\" (UID: \"8ed9dcbf-5502-4797-9b65-ff900aa065d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.658179 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.671011 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.677869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbgt\" (UniqueName: \"kubernetes.io/projected/c015fcec-cc64-4d3c-bddd-df7d887d0ea3-kube-api-access-vcbgt\") pod \"cluster-samples-operator-665b6dd947-kfsc2\" (UID: \"c015fcec-cc64-4d3c-bddd-df7d887d0ea3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.688118 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86444525-e481-4e9f-9a52-471432286641-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c8pmb\" (UID: \"86444525-e481-4e9f-9a52-471432286641\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.710142 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgmp2\" (UniqueName: \"kubernetes.io/projected/31032e25-5c02-40f3-8058-47ada861d728-kube-api-access-xgmp2\") pod \"openshift-apiserver-operator-796bbdcf4f-l58kb\" (UID: \"31032e25-5c02-40f3-8058-47ada861d728\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.734574 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.736866 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5vh\" (UniqueName: \"kubernetes.io/projected/a29dbc90-997d-4f83-8151-1cfcca661070-kube-api-access-jz5vh\") pod \"apiserver-76f77b778f-qwldq\" (UID: \"a29dbc90-997d-4f83-8151-1cfcca661070\") " pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.741974 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.758522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnchs\" (UniqueName: \"kubernetes.io/projected/0c5d01e6-e826-4f29-9160-ab28e19020b9-kube-api-access-mnchs\") pod \"openshift-config-operator-7777fb866f-f6czp\" (UID: \"0c5d01e6-e826-4f29-9160-ab28e19020b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.769171 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.772801 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crf7v\" (UniqueName: \"kubernetes.io/projected/5ededf03-fe00-4583-b2ee-ef2a3f301f79-kube-api-access-crf7v\") pod \"authentication-operator-69f744f599-ss8qc\" (UID: \"5ededf03-fe00-4583-b2ee-ef2a3f301f79\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.788465 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.794406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zd2\" (UniqueName: \"kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2\") pod \"oauth-openshift-558db77b4-hrs87\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.805192 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.807176 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.822371 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.826848 4718 request.go:700] Waited for 1.898103251s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.829006 4718 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.847736 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.854736 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.867828 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.882872 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.888126 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.907878 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.912052 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.925907 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.930972 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.947559 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.948431 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.957016 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd"] Jan 23 16:18:56 crc kubenswrapper[4718]: I0123 16:18:56.974046 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.008845 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6vcz\" (UniqueName: \"kubernetes.io/projected/2f771629-4dda-413b-9dc3-75bf6c13f310-kube-api-access-p6vcz\") pod \"etcd-operator-b45778765-vqvbg\" (UID: \"2f771629-4dda-413b-9dc3-75bf6c13f310\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:57 crc kubenswrapper[4718]: W0123 16:18:57.014771 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode179a011_4637_42d5_9679_e910440d25ac.slice/crio-a468f5f062d7543194a6ad8a0a755552eecff2df56a84880e2b1ca5c53525117 WatchSource:0}: 
Error finding container a468f5f062d7543194a6ad8a0a755552eecff2df56a84880e2b1ca5c53525117: Status 404 returned error can't find the container with id a468f5f062d7543194a6ad8a0a755552eecff2df56a84880e2b1ca5c53525117 Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.029641 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.049159 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.058486 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.084893 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.088773 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.119762 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.119860 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dfj\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj\") pod \"image-registry-697d97f7c8-hmshq\" (UID: 
\"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.122690 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9842n\" (UniqueName: \"kubernetes.io/projected/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-kube-api-access-9842n\") pod \"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.122732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-default-certificate\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.122773 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.124198 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dntc9\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-kube-api-access-dntc9\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.124226 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e51ecf0-a72a-461c-a669-8bce49b39003-service-ca-bundle\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.124247 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895467b8-6fd4-4822-be4a-c6576d88b855-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125154 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f4j\" (UniqueName: \"kubernetes.io/projected/6e51ecf0-a72a-461c-a669-8bce49b39003-kube-api-access-f9f4j\") pod 
\"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvwb\" (UniqueName: \"kubernetes.io/projected/7a72377e-a621-4ebb-b31a-7f405b218eb6-kube-api-access-nqvwb\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-cabundle\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125908 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a485827-90f5-4846-a975-b61eefef257f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125966 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4bf\" (UniqueName: \"kubernetes.io/projected/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-kube-api-access-tk4bf\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 
23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.125987 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126014 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sg5q\" (UniqueName: \"kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-apiservice-cert\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126088 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126146 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-stats-auth\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0b8a168-87d3-47a0-8527-6252cf9743df-trusted-ca\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126220 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126242 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d9f40b2-a707-4205-987b-862d9ecc5c22-config\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 
16:18:57.126269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxskh\" (UniqueName: \"kubernetes.io/projected/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-kube-api-access-rxskh\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126306 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a485827-90f5-4846-a975-b61eefef257f-config\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126397 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5kd\" (UniqueName: \"kubernetes.io/projected/3d9f40b2-a707-4205-987b-862d9ecc5c22-kube-api-access-kw5kd\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126418 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126457 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpls9\" (UniqueName: 
\"kubernetes.io/projected/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-kube-api-access-jpls9\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126479 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0b8a168-87d3-47a0-8527-6252cf9743df-metrics-tls\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126505 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126553 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-webhook-cert\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 
16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126612 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d9f40b2-a707-4205-987b-862d9ecc5c22-serving-cert\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126677 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-config\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126718 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126739 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-metrics-tls\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126870 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a485827-90f5-4846-a975-b61eefef257f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895467b8-6fd4-4822-be4a-c6576d88b855-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a72377e-a621-4ebb-b31a-7f405b218eb6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.126995 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895467b8-6fd4-4822-be4a-c6576d88b855-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: 
\"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127042 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/688723bd-e83c-43b0-b21a-83cf348544bd-tmpfs\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127069 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxzm\" (UniqueName: \"kubernetes.io/projected/688723bd-e83c-43b0-b21a-83cf348544bd-kube-api-access-jvxzm\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-key\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-metrics-certs\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127270 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127391 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxtb\" (UniqueName: \"kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.127451 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.128524 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 16:18:57.628510327 +0000 UTC m=+138.775752318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.132806 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"]
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228038 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.228182 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:57.7281512 +0000 UTC m=+138.875393191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xkc5\" (UniqueName: \"kubernetes.io/projected/86dfe814-b236-4d53-bb9f-8974dc942f62-kube-api-access-4xkc5\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228558 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtrn\" (UniqueName: \"kubernetes.io/projected/bbd71c65-e596-40fe-8f5a-86d849c44b24-kube-api-access-dhtrn\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228586 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86dfe814-b236-4d53-bb9f-8974dc942f62-machine-approver-tls\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228681 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7fh\" (UniqueName: \"kubernetes.io/projected/1b1a463f-85bc-4503-bb98-0540b914397c-kube-api-access-ks7fh\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228718 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-auth-proxy-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228743 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-mountpoint-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6dfj\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045e9b38-2543-4b98-919f-55227f2094a9-serving-cert\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9842n\" (UniqueName: \"kubernetes.io/projected/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-kube-api-access-9842n\") pod \"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-default-certificate\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228915 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-csi-data-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.228992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dntc9\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-kube-api-access-dntc9\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229020 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e51ecf0-a72a-461c-a669-8bce49b39003-service-ca-bundle\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229050 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-srv-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-socket-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895467b8-6fd4-4822-be4a-c6576d88b855-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229646 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9f4j\" (UniqueName: \"kubernetes.io/projected/6e51ecf0-a72a-461c-a669-8bce49b39003-kube-api-access-f9f4j\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.229714 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788dq\" (UniqueName: \"kubernetes.io/projected/0de3fd37-4875-4504-949f-0c5257019029-kube-api-access-788dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqvwb\" (UniqueName: \"kubernetes.io/projected/7a72377e-a621-4ebb-b31a-7f405b218eb6-kube-api-access-nqvwb\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230217 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-cabundle\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a485827-90f5-4846-a975-b61eefef257f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230245 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895467b8-6fd4-4822-be4a-c6576d88b855-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230614 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbd71c65-e596-40fe-8f5a-86d849c44b24-metrics-tls\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230764 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd71c65-e596-40fe-8f5a-86d849c44b24-config-volume\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230878 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4bf\" (UniqueName: \"kubernetes.io/projected/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-kube-api-access-tk4bf\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e51ecf0-a72a-461c-a669-8bce49b39003-service-ca-bundle\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.230992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sg5q\" (UniqueName: \"kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231021 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-srv-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-cabundle\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231296 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-apiservice-cert\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh5kr\" (UniqueName: \"kubernetes.io/projected/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-kube-api-access-fh5kr\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-certs\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231656 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85x6\" (UniqueName: \"kubernetes.io/projected/34588296-9ec4-4018-ab87-cfbec5d33d98-kube-api-access-j85x6\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4r4\" (UniqueName: \"kubernetes.io/projected/6ec99fe0-df93-4747-917e-ed13e27f917f-kube-api-access-4d4r4\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de3fd37-4875-4504-949f-0c5257019029-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231714 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231748 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-stats-auth\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231907 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0b8a168-87d3-47a0-8527-6252cf9743df-trusted-ca\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231932 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d9f40b2-a707-4205-987b-862d9ecc5c22-config\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.231979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05b88402-e2e8-4f32-a75e-1f434da51313-proxy-tls\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxskh\" (UniqueName: \"kubernetes.io/projected/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-kube-api-access-rxskh\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232048 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a485827-90f5-4846-a975-b61eefef257f-config\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b1a463f-85bc-4503-bb98-0540b914397c-cert\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232132 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-plugins-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5kd\" (UniqueName: \"kubernetes.io/projected/3d9f40b2-a707-4205-987b-862d9ecc5c22-kube-api-access-kw5kd\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"
Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.232171 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:57.732152512 +0000 UTC m=+138.879394693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232207 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv58t\" (UniqueName: \"kubernetes.io/projected/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-kube-api-access-gv58t\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232242 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232268 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpls9\" (UniqueName: \"kubernetes.io/projected/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-kube-api-access-jpls9\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232294 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0b8a168-87d3-47a0-8527-6252cf9743df-metrics-tls\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232322 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkkn\" (UniqueName: \"kubernetes.io/projected/c8067326-df8d-4bb1-8cb4-bca3e592a6b0-kube-api-access-rbkkn\") pod \"migrator-59844c95c7-mn2k5\" (UID: \"c8067326-df8d-4bb1-8cb4-bca3e592a6b0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232374 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-auth-proxy-config\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232445 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-webhook-cert\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzhf\" (UniqueName: \"kubernetes.io/projected/05b88402-e2e8-4f32-a75e-1f434da51313-kube-api-access-7lzhf\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-node-bootstrap-token\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232527 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42svp\" (UniqueName: \"kubernetes.io/projected/a2118990-95a4-4a61-8c6a-3a72bdea8642-kube-api-access-42svp\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232555 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232582 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d9f40b2-a707-4205-987b-862d9ecc5c22-serving-cert\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232611 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34588296-9ec4-4018-ab87-cfbec5d33d98-proxy-tls\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-config\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232742 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-metrics-tls\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232815 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc228\" (UniqueName: \"kubernetes.io/projected/045e9b38-2543-4b98-919f-55227f2094a9-kube-api-access-bc228\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232867 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a485827-90f5-4846-a975-b61eefef257f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232894 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-config\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.233135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a485827-90f5-4846-a975-b61eefef257f-config\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.233422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d9f40b2-a707-4205-987b-862d9ecc5c22-config\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.234388 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0b8a168-87d3-47a0-8527-6252cf9743df-trusted-ca\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.232269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236304 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-registration-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895467b8-6fd4-4822-be4a-c6576d88b855-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236383 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a72377e-a621-4ebb-b31a-7f405b218eb6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236436 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3fd37-4875-4504-949f-0c5257019029-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"
Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895467b8-6fd4-4822-be4a-c6576d88b855-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") "
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236511 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/688723bd-e83c-43b0-b21a-83cf348544bd-tmpfs\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236530 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxzm\" (UniqueName: \"kubernetes.io/projected/688723bd-e83c-43b0-b21a-83cf348544bd-kube-api-access-jvxzm\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236547 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zwz\" (UniqueName: \"kubernetes.io/projected/14019d5a-595b-46f5-98f0-bed12ff9ab9f-kube-api-access-t4zwz\") pod \"downloads-7954f5f757-rlqmc\" (UID: \"14019d5a-595b-46f5-98f0-bed12ff9ab9f\") " pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236578 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-key\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236598 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-metrics-certs\") pod 
\"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236734 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34588296-9ec4-4018-ab87-cfbec5d33d98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-images\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236786 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsxtb\" (UniqueName: \"kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.236821 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-trusted-ca\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.238131 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-default-certificate\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.239954 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-config\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.240764 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/688723bd-e83c-43b0-b21a-83cf348544bd-tmpfs\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.242300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-stats-auth\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.244956 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.246671 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895467b8-6fd4-4822-be4a-c6576d88b855-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.249331 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.250599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-webhook-cert\") pod 
\"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.250982 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.251451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.251687 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.252957 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.256057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.256698 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e51ecf0-a72a-461c-a669-8bce49b39003-metrics-certs\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.265766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0b8a168-87d3-47a0-8527-6252cf9743df-metrics-tls\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.269155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9842n\" (UniqueName: \"kubernetes.io/projected/6f3f6151-c368-4b06-a85e-a8b9d3969ca0-kube-api-access-9842n\") pod \"multus-admission-controller-857f4d67dd-vppcb\" (UID: \"6f3f6151-c368-4b06-a85e-a8b9d3969ca0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.274299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-metrics-tls\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.274448 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.274747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-signing-key\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.280368 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d9f40b2-a707-4205-987b-862d9ecc5c22-serving-cert\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.280869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a485827-90f5-4846-a975-b61eefef257f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.281310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a72377e-a621-4ebb-b31a-7f405b218eb6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.285010 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dntc9\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-kube-api-access-dntc9\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.287407 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/688723bd-e83c-43b0-b21a-83cf348544bd-apiservice-cert\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.310737 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6dfj\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337528 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34588296-9ec4-4018-ab87-cfbec5d33d98-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337791 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-images\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-trusted-ca\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xkc5\" (UniqueName: \"kubernetes.io/projected/86dfe814-b236-4d53-bb9f-8974dc942f62-kube-api-access-4xkc5\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtrn\" (UniqueName: \"kubernetes.io/projected/bbd71c65-e596-40fe-8f5a-86d849c44b24-kube-api-access-dhtrn\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86dfe814-b236-4d53-bb9f-8974dc942f62-machine-approver-tls\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.337984 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7fh\" (UniqueName: \"kubernetes.io/projected/1b1a463f-85bc-4503-bb98-0540b914397c-kube-api-access-ks7fh\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-auth-proxy-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-mountpoint-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338039 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045e9b38-2543-4b98-919f-55227f2094a9-serving-cert\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338060 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-csi-data-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-srv-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338096 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-socket-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338124 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788dq\" (UniqueName: \"kubernetes.io/projected/0de3fd37-4875-4504-949f-0c5257019029-kube-api-access-788dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338142 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338185 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbd71c65-e596-40fe-8f5a-86d849c44b24-metrics-tls\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd71c65-e596-40fe-8f5a-86d849c44b24-config-volume\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338249 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-srv-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh5kr\" (UniqueName: \"kubernetes.io/projected/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-kube-api-access-fh5kr\") pod \"catalog-operator-68c6474976-j57jq\" (UID: 
\"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338283 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-certs\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85x6\" (UniqueName: \"kubernetes.io/projected/34588296-9ec4-4018-ab87-cfbec5d33d98-kube-api-access-j85x6\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d4r4\" (UniqueName: \"kubernetes.io/projected/6ec99fe0-df93-4747-917e-ed13e27f917f-kube-api-access-4d4r4\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338345 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de3fd37-4875-4504-949f-0c5257019029-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338366 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05b88402-e2e8-4f32-a75e-1f434da51313-proxy-tls\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338391 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b1a463f-85bc-4503-bb98-0540b914397c-cert\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-plugins-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.338750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-socket-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339328 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv58t\" (UniqueName: \"kubernetes.io/projected/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-kube-api-access-gv58t\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rbkkn\" (UniqueName: \"kubernetes.io/projected/c8067326-df8d-4bb1-8cb4-bca3e592a6b0-kube-api-access-rbkkn\") pod \"migrator-59844c95c7-mn2k5\" (UID: \"c8067326-df8d-4bb1-8cb4-bca3e592a6b0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339395 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-auth-proxy-config\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzhf\" (UniqueName: \"kubernetes.io/projected/05b88402-e2e8-4f32-a75e-1f434da51313-kube-api-access-7lzhf\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-node-bootstrap-token\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42svp\" (UniqueName: \"kubernetes.io/projected/a2118990-95a4-4a61-8c6a-3a72bdea8642-kube-api-access-42svp\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " 
pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34588296-9ec4-4018-ab87-cfbec5d33d98-proxy-tls\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339498 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc228\" (UniqueName: \"kubernetes.io/projected/045e9b38-2543-4b98-919f-55227f2094a9-kube-api-access-bc228\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339523 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-config\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339540 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-registration-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339547 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34588296-9ec4-4018-ab87-cfbec5d33d98-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339579 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3fd37-4875-4504-949f-0c5257019029-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.339847 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zwz\" (UniqueName: \"kubernetes.io/projected/14019d5a-595b-46f5-98f0-bed12ff9ab9f-kube-api-access-t4zwz\") pod \"downloads-7954f5f757-rlqmc\" (UID: \"14019d5a-595b-46f5-98f0-bed12ff9ab9f\") " pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.340055 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-images\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.340608 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.341649 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqvwb\" (UniqueName: \"kubernetes.io/projected/7a72377e-a621-4ebb-b31a-7f405b218eb6-kube-api-access-nqvwb\") pod \"control-plane-machine-set-operator-78cbb6b69f-69wt2\" (UID: \"7a72377e-a621-4ebb-b31a-7f405b218eb6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.341825 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:57.8417455 +0000 UTC m=+138.988987491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.341878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-mountpoint-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.342357 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-trusted-ca\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.342440 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-csi-data-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.342562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86dfe814-b236-4d53-bb9f-8974dc942f62-auth-proxy-config\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.342579 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd71c65-e596-40fe-8f5a-86d849c44b24-config-volume\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.343361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05b88402-e2e8-4f32-a75e-1f434da51313-auth-proxy-config\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.343781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-plugins-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.343813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbd71c65-e596-40fe-8f5a-86d849c44b24-metrics-tls\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.343893 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2118990-95a4-4a61-8c6a-3a72bdea8642-registration-dir\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 
16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.344526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045e9b38-2543-4b98-919f-55227f2094a9-config\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.345030 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3fd37-4875-4504-949f-0c5257019029-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.345618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-srv-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.347111 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045e9b38-2543-4b98-919f-55227f2094a9-serving-cert\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.347328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86dfe814-b236-4d53-bb9f-8974dc942f62-machine-approver-tls\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.347622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34588296-9ec4-4018-ab87-cfbec5d33d98-proxy-tls\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.348221 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.348920 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-certs\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.349157 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05b88402-e2e8-4f32-a75e-1f434da51313-proxy-tls\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.349405 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de3fd37-4875-4504-949f-0c5257019029-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.349488 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-node-bootstrap-token\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.349806 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6ec99fe0-df93-4747-917e-ed13e27f917f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.352493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b1a463f-85bc-4503-bb98-0540b914397c-cert\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.359912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-srv-cert\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.360567 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-ss8qc"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.364081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a485827-90f5-4846-a975-b61eefef257f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-stp9k\" (UID: \"1a485827-90f5-4846-a975-b61eefef257f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: W0123 16:18:57.367207 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ededf03_fe00_4583_b2ee_ef2a3f301f79.slice/crio-2d3e46e1b70bd2e90335a54689aed745579f6a150e707c604843434a92b7f77f WatchSource:0}: Error finding container 2d3e46e1b70bd2e90335a54689aed745579f6a150e707c604843434a92b7f77f: Status 404 returned error can't find the container with id 2d3e46e1b70bd2e90335a54689aed745579f6a150e707c604843434a92b7f77f Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.377692 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.383645 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4bf\" (UniqueName: \"kubernetes.io/projected/3d8b7f97-e18e-49a9-a3a2-cfd01da217a8-kube-api-access-tk4bf\") pod \"package-server-manager-789f6589d5-9bnnc\" (UID: \"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.391903 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.392502 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f6czp"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.408746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sg5q\" (UniqueName: \"kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q\") pod \"collect-profiles-29486415-hw59c\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.414189 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.435084 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.441556 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5kd\" (UniqueName: \"kubernetes.io/projected/3d9f40b2-a707-4205-987b-862d9ecc5c22-kube-api-access-kw5kd\") pod \"service-ca-operator-777779d784-pq76l\" (UID: \"3d9f40b2-a707-4205-987b-862d9ecc5c22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.441963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.442400 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:57.942379158 +0000 UTC m=+139.089621149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.443904 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.459389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxskh\" (UniqueName: \"kubernetes.io/projected/8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a-kube-api-access-rxskh\") pod \"dns-operator-744455d44c-skvf4\" (UID: \"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.481194 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.499613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.514064 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpls9\" (UniqueName: \"kubernetes.io/projected/44ee7633-a090-4fc2-80ce-2e4fe0ddcad9-kube-api-access-jpls9\") pod \"service-ca-9c57cc56f-4kts8\" (UID: \"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.514156 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.518834 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.533583 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qwldq"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.536131 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5vn"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.542809 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.543018 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.042978876 +0000 UTC m=+139.190220887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.543302 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.543694 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.043683774 +0000 UTC m=+139.190925775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.553596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxzm\" (UniqueName: \"kubernetes.io/projected/688723bd-e83c-43b0-b21a-83cf348544bd-kube-api-access-jvxzm\") pod \"packageserver-d55dfcdfc-4btm4\" (UID: \"688723bd-e83c-43b0-b21a-83cf348544bd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.562976 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895467b8-6fd4-4822-be4a-c6576d88b855-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d4557\" (UID: \"895467b8-6fd4-4822-be4a-c6576d88b855\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.585238 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsxtb\" (UniqueName: \"kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb\") pod \"marketplace-operator-79b997595-p6xg6\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.617080 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9d63f14d-4ca8-40ca-bf45-52803a21c4fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2m4vg\" (UID: \"9d63f14d-4ca8-40ca-bf45-52803a21c4fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.634292 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zwz\" (UniqueName: \"kubernetes.io/projected/14019d5a-595b-46f5-98f0-bed12ff9ab9f-kube-api-access-t4zwz\") pod \"downloads-7954f5f757-rlqmc\" (UID: \"14019d5a-595b-46f5-98f0-bed12ff9ab9f\") " pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.644709 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.644984 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.144946139 +0000 UTC m=+139.292188140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.645090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.645758 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.145740929 +0000 UTC m=+139.292982930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.654171 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-788dq\" (UniqueName: \"kubernetes.io/projected/0de3fd37-4875-4504-949f-0c5257019029-kube-api-access-788dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-lplkt\" (UID: \"0de3fd37-4875-4504-949f-0c5257019029\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.659585 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.665578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42svp\" (UniqueName: \"kubernetes.io/projected/a2118990-95a4-4a61-8c6a-3a72bdea8642-kube-api-access-42svp\") pod \"csi-hostpathplugin-ps7vw\" (UID: \"a2118990-95a4-4a61-8c6a-3a72bdea8642\") " pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.673084 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.683537 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqvbg"] Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.688183 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d4r4\" (UniqueName: \"kubernetes.io/projected/6ec99fe0-df93-4747-917e-ed13e27f917f-kube-api-access-4d4r4\") pod \"olm-operator-6b444d44fb-ktp9d\" (UID: \"6ec99fe0-df93-4747-917e-ed13e27f917f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.703432 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv58t\" (UniqueName: \"kubernetes.io/projected/e125f017-7b6d-4a50-abf2-9b0b200b1ccb-kube-api-access-gv58t\") pod \"machine-config-server-5h4nn\" (UID: \"e125f017-7b6d-4a50-abf2-9b0b200b1ccb\") " pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.707139 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.713662 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.720194 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.726913 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbkkn\" (UniqueName: \"kubernetes.io/projected/c8067326-df8d-4bb1-8cb4-bca3e592a6b0-kube-api-access-rbkkn\") pod \"migrator-59844c95c7-mn2k5\" (UID: \"c8067326-df8d-4bb1-8cb4-bca3e592a6b0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.728400 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.746293 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.746656 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.246574383 +0000 UTC m=+139.393816404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.746778 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.747736 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.247703542 +0000 UTC m=+139.394945733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.750155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xkc5\" (UniqueName: \"kubernetes.io/projected/86dfe814-b236-4d53-bb9f-8974dc942f62-kube-api-access-4xkc5\") pod \"machine-approver-56656f9798-ptr79\" (UID: \"86dfe814-b236-4d53-bb9f-8974dc942f62\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.751911 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.759035 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.763049 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtrn\" (UniqueName: \"kubernetes.io/projected/bbd71c65-e596-40fe-8f5a-86d849c44b24-kube-api-access-dhtrn\") pod \"dns-default-n58ck\" (UID: \"bbd71c65-e596-40fe-8f5a-86d849c44b24\") " pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.781982 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.789723 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.791913 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh5kr\" (UniqueName: \"kubernetes.io/projected/01f81b66-e3c3-411c-9f5a-db54b3a3aa1d-kube-api-access-fh5kr\") pod \"catalog-operator-68c6474976-j57jq\" (UID: \"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.796754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.820688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.833611 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.835201 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85x6\" (UniqueName: \"kubernetes.io/projected/34588296-9ec4-4018-ab87-cfbec5d33d98-kube-api-access-j85x6\") pod \"machine-config-controller-84d6567774-2fc5k\" (UID: \"34588296-9ec4-4018-ab87-cfbec5d33d98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.841830 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.850131 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-n58ck" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.850183 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.850573 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.350528766 +0000 UTC m=+139.497770787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.850791 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.851465 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-23 16:18:58.35144634 +0000 UTC m=+139.498688371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.854330 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7fh\" (UniqueName: \"kubernetes.io/projected/1b1a463f-85bc-4503-bb98-0540b914397c-kube-api-access-ks7fh\") pod \"ingress-canary-lfmcg\" (UID: \"1b1a463f-85bc-4503-bb98-0540b914397c\") " pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.870433 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.871004 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc228\" (UniqueName: \"kubernetes.io/projected/045e9b38-2543-4b98-919f-55227f2094a9-kube-api-access-bc228\") pod \"console-operator-58897d9998-5p7bb\" (UID: \"045e9b38-2543-4b98-919f-55227f2094a9\") " pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.878719 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5h4nn" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.886943 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lfmcg" Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.953189 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.953816 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.453777572 +0000 UTC m=+139.601019593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:57 crc kubenswrapper[4718]: I0123 16:18:57.953893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:57 crc kubenswrapper[4718]: E0123 16:18:57.954507 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.454493161 +0000 UTC m=+139.601735192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.013229 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" event={"ID":"be7521bc-a519-439d-9ae3-4fb10368e494","Type":"ContainerStarted","Data":"cfe05ced058dad575d0269f7deb34142da9ef13dca740dbc1bdb3f1a6c5f4854"} Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.015500 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" event={"ID":"e179a011-4637-42d5-9679-e910440d25ac","Type":"ContainerStarted","Data":"a468f5f062d7543194a6ad8a0a755552eecff2df56a84880e2b1ca5c53525117"} Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.017275 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" event={"ID":"b2c947b7-81d9-4041-9b78-668f44427eb9","Type":"ContainerStarted","Data":"13338b77998472746122d527c1d16e5d56bf82795285e99322ee47559a6e6a03"} Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.018839 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" event={"ID":"5ededf03-fe00-4583-b2ee-ef2a3f301f79","Type":"ContainerStarted","Data":"2d3e46e1b70bd2e90335a54689aed745579f6a150e707c604843434a92b7f77f"} Jan 23 16:18:58 crc 
kubenswrapper[4718]: I0123 16:18:58.026562 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" event={"ID":"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77","Type":"ContainerStarted","Data":"5c410cb401bc2f39b7732f4ea590739b99b99d89d31ae781c9c87efd282a4a4f"} Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.028158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lt6zb" event={"ID":"0893e7ff-b1d9-4227-ae44-a873d8355a70","Type":"ContainerStarted","Data":"9bad95bc040606a183f3b3dc374948829e974261cb26d15863d242647571f23e"} Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.055561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.056100 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.556055623 +0000 UTC m=+139.703297644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.112745 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.126688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.158168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.158757 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.658727393 +0000 UTC m=+139.805969414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.259081 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.259613 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.759585598 +0000 UTC m=+139.906827619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.360894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.361590 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.861546281 +0000 UTC m=+140.008788322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.414369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzhf\" (UniqueName: \"kubernetes.io/projected/05b88402-e2e8-4f32-a75e-1f434da51313-kube-api-access-7lzhf\") pod \"machine-config-operator-74547568cd-th59x\" (UID: \"05b88402-e2e8-4f32-a75e-1f434da51313\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.422535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0b8a168-87d3-47a0-8527-6252cf9743df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2v8tf\" (UID: \"b0b8a168-87d3-47a0-8527-6252cf9743df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.428138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9f4j\" (UniqueName: \"kubernetes.io/projected/6e51ecf0-a72a-461c-a669-8bce49b39003-kube-api-access-f9f4j\") pod \"router-default-5444994796-hzrpf\" (UID: \"6e51ecf0-a72a-461c-a669-8bce49b39003\") " pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:58 crc kubenswrapper[4718]: W0123 16:18:58.436191 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5d01e6_e826_4f29_9160_ab28e19020b9.slice/crio-931bdf447453122be169af777662b7525bba9323e3c2be24088b262a5cc84221 WatchSource:0}: Error finding container 931bdf447453122be169af777662b7525bba9323e3c2be24088b262a5cc84221: Status 404 returned error can't find the container with id 931bdf447453122be169af777662b7525bba9323e3c2be24088b262a5cc84221 Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.461963 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.462231 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.962180539 +0000 UTC m=+140.109422560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.462380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.463140 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:58.963122963 +0000 UTC m=+140.110364994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.561516 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.563479 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.563885 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.063855995 +0000 UTC m=+140.211097996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.565049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.570982 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.070957466 +0000 UTC m=+140.218199457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.669341 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.672765 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.673068 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.173036811 +0000 UTC m=+140.320278802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.673168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.674068 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.174043207 +0000 UTC m=+140.321285198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.704441 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.774164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.774529 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.274514301 +0000 UTC m=+140.421756292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.876386 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.877094 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.37707471 +0000 UTC m=+140.524316701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.894996 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vppcb"] Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.977503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:58 crc kubenswrapper[4718]: E0123 16:18:58.977862 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.477847732 +0000 UTC m=+140.625089723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:58 crc kubenswrapper[4718]: I0123 16:18:58.988447 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n58ck"] Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.051564 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" event={"ID":"be7521bc-a519-439d-9ae3-4fb10368e494","Type":"ContainerStarted","Data":"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.052098 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.054490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" event={"ID":"31032e25-5c02-40f3-8058-47ada861d728","Type":"ContainerStarted","Data":"0f09e5d90726414b3b3ee7971636040aba78ccf7c44afa29a0d0a8681575beae"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.056692 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" event={"ID":"8ed9dcbf-5502-4797-9b65-ff900aa065d8","Type":"ContainerStarted","Data":"8d24de8a4552b86809808b8104468eaa6612f08aed1c7be813e9c607b3bfe96c"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.059464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" event={"ID":"c015fcec-cc64-4d3c-bddd-df7d887d0ea3","Type":"ContainerStarted","Data":"248943ffaaf891dcd1625cc95618042963001633bc66baf8e5e55890df593e31"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.060607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" event={"ID":"6f3f6151-c368-4b06-a85e-a8b9d3969ca0","Type":"ContainerStarted","Data":"6a3d8d97c8099f4154e908cd2636e1808c4a0e3eb3f8d0e9e8e323e0e0e19366"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.061867 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" event={"ID":"86444525-e481-4e9f-9a52-471432286641","Type":"ContainerStarted","Data":"a8ff31bb6d4468fa907a1418bb8d0830ff1c1abfd313f5bb624feda010df7d60"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.063226 4718 generic.go:334] "Generic (PLEG): container finished" podID="e179a011-4637-42d5-9679-e910440d25ac" containerID="b45d4639b2f0bfe6167efa0bdb4d14b7c92b8406db5a3fa5af44b13dad838213" exitCode=0 Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.063274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" event={"ID":"e179a011-4637-42d5-9679-e910440d25ac","Type":"ContainerDied","Data":"b45d4639b2f0bfe6167efa0bdb4d14b7c92b8406db5a3fa5af44b13dad838213"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.070001 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" event={"ID":"2f771629-4dda-413b-9dc3-75bf6c13f310","Type":"ContainerStarted","Data":"1220aab4a11c5e0afbdb67aa9ecfa294edf6930e5951a9b9d4ec50abb7a320ff"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.072111 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.073141 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" event={"ID":"a29dbc90-997d-4f83-8151-1cfcca661070","Type":"ContainerStarted","Data":"1d4da62da5eaf971665a5a7b56093c6e9b6022ec883dedb6c07453041d45bfe1"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.082447 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.083227 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.583209951 +0000 UTC m=+140.730451942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.085121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" event={"ID":"c96e05b0-db9f-4670-839d-f15b53eeffc6","Type":"ContainerStarted","Data":"e01a81b3ceec86cba5572ec3be38cc184c5034abe982da49a2365de12bea8647"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.128499 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" event={"ID":"b2c947b7-81d9-4041-9b78-668f44427eb9","Type":"ContainerStarted","Data":"0aaf098ba2d1be22e1b7c2aeb8663fc00fe645ebe0d5c2f5c30d1a5a7167b976"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.129680 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.183080 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.184339 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.684322893 +0000 UTC m=+140.831564884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.200985 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lt6zb" event={"ID":"0893e7ff-b1d9-4227-ae44-a873d8355a70","Type":"ContainerStarted","Data":"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.201079 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.202395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" event={"ID":"0c5d01e6-e826-4f29-9160-ab28e19020b9","Type":"ContainerStarted","Data":"931bdf447453122be169af777662b7525bba9323e3c2be24088b262a5cc84221"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.284656 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.291262 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.791240452 +0000 UTC m=+140.938482643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.329923 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" podStartSLOduration=121.329904608 podStartE2EDuration="2m1.329904608s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:59.329407635 +0000 UTC m=+140.476649626" watchObservedRunningTime="2026-01-23 16:18:59.329904608 +0000 UTC m=+140.477146599" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.396710 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.397016 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:18:59.89700108 +0000 UTC m=+141.044243071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.408365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" event={"ID":"5ededf03-fe00-4583-b2ee-ef2a3f301f79","Type":"ContainerStarted","Data":"7806aca0a9218ab66c5d18a48ab534ffd054eb504243be20c237000af66b85b5"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.422222 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" event={"ID":"ae4b9e6a-5c1c-4cb5-899e-6c451c179b77","Type":"ContainerStarted","Data":"842c5b862d86dc7ea4e32c55c63ea98933bf18bdafb9c341f96b6f15d7d9114d"} Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.446016 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" podStartSLOduration=120.445990332 podStartE2EDuration="2m0.445990332s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:59.397146174 +0000 UTC m=+140.544388165" watchObservedRunningTime="2026-01-23 16:18:59.445990332 +0000 UTC m=+140.593232333" Jan 23 16:18:59 crc kubenswrapper[4718]: 
I0123 16:18:59.471994 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lt6zb" podStartSLOduration=121.471976735 podStartE2EDuration="2m1.471976735s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:59.470366854 +0000 UTC m=+140.617608845" watchObservedRunningTime="2026-01-23 16:18:59.471976735 +0000 UTC m=+140.619218736" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.517977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.530594 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.0305678 +0000 UTC m=+141.177809791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.531376 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" podStartSLOduration=121.53134946 podStartE2EDuration="2m1.53134946s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:59.496440859 +0000 UTC m=+140.643682860" watchObservedRunningTime="2026-01-23 16:18:59.53134946 +0000 UTC m=+140.678591451" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.571098 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7m6gn" podStartSLOduration=121.571067404 podStartE2EDuration="2m1.571067404s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:18:59.5204026 +0000 UTC m=+140.667644601" watchObservedRunningTime="2026-01-23 16:18:59.571067404 +0000 UTC m=+140.718309395" Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.621599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.622168 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.122145218 +0000 UTC m=+141.269387209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.723986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.724588 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.224577232 +0000 UTC m=+141.371819223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.825408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.826101 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.326072173 +0000 UTC m=+141.473314164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:18:59 crc kubenswrapper[4718]: I0123 16:18:59.926798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:18:59 crc kubenswrapper[4718]: E0123 16:18:59.927132 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.427121183 +0000 UTC m=+141.574363184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.028768 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.029193 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.529168947 +0000 UTC m=+141.676410938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.129661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.130279 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.630265678 +0000 UTC m=+141.777507659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.233561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.233989 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.733936834 +0000 UTC m=+141.881178825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.234460 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.234930 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.73491562 +0000 UTC m=+141.882157611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.336056 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.336332 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.836280987 +0000 UTC m=+141.983522978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.336544 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.337027 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.837008025 +0000 UTC m=+141.984250016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.430655 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5h4nn" event={"ID":"e125f017-7b6d-4a50-abf2-9b0b200b1ccb","Type":"ContainerStarted","Data":"3d54fd58fe4a1d2ddb98824fa7d22c554d9a726f47aaa1aca74786257110163a"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.430964 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5h4nn" event={"ID":"e125f017-7b6d-4a50-abf2-9b0b200b1ccb","Type":"ContainerStarted","Data":"ec1fe5567e5ee116a1b8d69dde840b0a59a688c92f2cc1c46173a0f12d445053"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.432091 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" event={"ID":"86444525-e481-4e9f-9a52-471432286641","Type":"ContainerStarted","Data":"61daea44b853607b81e8b07d0fc47c958906bf7c17a1c5cd702e2d0b4957f88c"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.433716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" event={"ID":"2f771629-4dda-413b-9dc3-75bf6c13f310","Type":"ContainerStarted","Data":"9d2cc5dbdd61a72a0ef0ed306696fb3a2fd59a571ee76b0fae407db162c62356"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.435059 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n58ck" 
event={"ID":"bbd71c65-e596-40fe-8f5a-86d849c44b24","Type":"ContainerStarted","Data":"b3cb6a6306e779b536161e51877f352ef3a15eba0fa8b8012cfb49031f480250"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.435992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hzrpf" event={"ID":"6e51ecf0-a72a-461c-a669-8bce49b39003","Type":"ContainerStarted","Data":"6614fde9144bf47064565689649df9cb0ac39f8951a85d99d61d53be3371926b"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.437034 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.438410 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:00.938393753 +0000 UTC m=+142.085635744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.442054 4718 generic.go:334] "Generic (PLEG): container finished" podID="0c5d01e6-e826-4f29-9160-ab28e19020b9" containerID="ed67e6e627fc95ffc4b633a677059b7e2ae9a5e3520c78c6a015387f152b4855" exitCode=0 Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.442210 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" event={"ID":"0c5d01e6-e826-4f29-9160-ab28e19020b9","Type":"ContainerDied","Data":"ed67e6e627fc95ffc4b633a677059b7e2ae9a5e3520c78c6a015387f152b4855"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.453270 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" event={"ID":"8ed9dcbf-5502-4797-9b65-ff900aa065d8","Type":"ContainerStarted","Data":"274788c4195e3eb1fe84f59ad259160d170402b33898a47ba7557af75434bdfc"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.455677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" event={"ID":"c015fcec-cc64-4d3c-bddd-df7d887d0ea3","Type":"ContainerStarted","Data":"802c5ab310802e1aa1cea25c791a3e0a4f395e5d5b8df76170da06dd2461ff6f"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.458782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" 
event={"ID":"c96e05b0-db9f-4670-839d-f15b53eeffc6","Type":"ContainerStarted","Data":"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.459886 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.464184 4718 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hrs87 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.464284 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.465375 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5h4nn" podStartSLOduration=6.465359971 podStartE2EDuration="6.465359971s" podCreationTimestamp="2026-01-23 16:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:00.46377073 +0000 UTC m=+141.611012721" watchObservedRunningTime="2026-01-23 16:19:00.465359971 +0000 UTC m=+141.612601962" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.466242 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" 
event={"ID":"31032e25-5c02-40f3-8058-47ada861d728","Type":"ContainerStarted","Data":"99d13d1d2f22157abf1b7fa305bf22f03f056873326c73c129f0dc7efd7c2f7b"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.471686 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" event={"ID":"86dfe814-b236-4d53-bb9f-8974dc942f62","Type":"ContainerStarted","Data":"8320385bee7a772e1b195fdaf58efb57dbebfba5f830491ffab9b0f7c2ccd15f"} Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.491598 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vqvbg" podStartSLOduration=121.49157781 podStartE2EDuration="2m1.49157781s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:00.490118004 +0000 UTC m=+141.637359995" watchObservedRunningTime="2026-01-23 16:19:00.49157781 +0000 UTC m=+141.638819801" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.518420 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" podStartSLOduration=122.518401735 podStartE2EDuration="2m2.518401735s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:00.517936004 +0000 UTC m=+141.665177985" watchObservedRunningTime="2026-01-23 16:19:00.518401735 +0000 UTC m=+141.665643726" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.541355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.543692 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.043678781 +0000 UTC m=+142.190920772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.544990 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c8pmb" podStartSLOduration=121.544977973 podStartE2EDuration="2m1.544977973s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:00.53702797 +0000 UTC m=+141.684269961" watchObservedRunningTime="2026-01-23 16:19:00.544977973 +0000 UTC m=+141.692219954" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.557976 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5d01e6_e826_4f29_9160_ab28e19020b9.slice/crio-ed67e6e627fc95ffc4b633a677059b7e2ae9a5e3520c78c6a015387f152b4855.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5d01e6_e826_4f29_9160_ab28e19020b9.slice/crio-conmon-ed67e6e627fc95ffc4b633a677059b7e2ae9a5e3520c78c6a015387f152b4855.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.603903 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l58kb" podStartSLOduration=122.603878807 podStartE2EDuration="2m2.603878807s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:00.602776049 +0000 UTC m=+141.750018050" watchObservedRunningTime="2026-01-23 16:19:00.603878807 +0000 UTC m=+141.751120788" Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.644433 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.645196 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.145176501 +0000 UTC m=+142.292418492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.747729 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.748194 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.248178211 +0000 UTC m=+142.395420202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.849485 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.850044 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.35000948 +0000 UTC m=+142.497251471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.952371 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:00 crc kubenswrapper[4718]: E0123 16:19:00.952775 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.452763833 +0000 UTC m=+142.600005824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.977999 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lfmcg"] Jan 23 16:19:00 crc kubenswrapper[4718]: I0123 16:19:00.989274 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.059157 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.059673 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.559652211 +0000 UTC m=+142.706894202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.161071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.161503 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.66148871 +0000 UTC m=+142.808730701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.183981 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.230092 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.250237 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.262791 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.263372 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.76333099 +0000 UTC m=+142.910573141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.365389 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.366153 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.866140664 +0000 UTC m=+143.013382645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.466358 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.466749 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:01.966732932 +0000 UTC m=+143.113974923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.511933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" event={"ID":"c015fcec-cc64-4d3c-bddd-df7d887d0ea3","Type":"ContainerStarted","Data":"bfb5635621016bbd1aa5a6a4f2068695b7078421ee44639220a70e87fde30ab4"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.529291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" event={"ID":"7a72377e-a621-4ebb-b31a-7f405b218eb6","Type":"ContainerStarted","Data":"b2ef46165d5de01113ba21cf70b37f2979952761ccc3e48ba455d32bce4e00df"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.534752 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n58ck" event={"ID":"bbd71c65-e596-40fe-8f5a-86d849c44b24","Type":"ContainerStarted","Data":"7be62c5b4fe299bfe9af653fd47260ded208524b555b392db95694366c4a3f54"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.536700 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" event={"ID":"0c5d01e6-e826-4f29-9160-ab28e19020b9","Type":"ContainerStarted","Data":"cd50dc5a3da4d2ad34f249b9df75c3c43258f1445c0efecde01676db81aa8078"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.537394 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.541514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" event={"ID":"c8067326-df8d-4bb1-8cb4-bca3e592a6b0","Type":"ContainerStarted","Data":"a3f0fa97c56c868e7dc266a3079cd3ce8988ce0da4111f70a84adeb974a637d3"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.557985 4718 generic.go:334] "Generic (PLEG): container finished" podID="a29dbc90-997d-4f83-8151-1cfcca661070" containerID="e807924648764fd20ed233ac0f4f24a72c1d3c6c564a4306b6bf23018327f188" exitCode=0 Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.558142 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" event={"ID":"a29dbc90-997d-4f83-8151-1cfcca661070","Type":"ContainerDied","Data":"e807924648764fd20ed233ac0f4f24a72c1d3c6c564a4306b6bf23018327f188"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.569624 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.574226 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.074206476 +0000 UTC m=+143.221448467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.574800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" event={"ID":"688723bd-e83c-43b0-b21a-83cf348544bd","Type":"ContainerStarted","Data":"9748fd33bd90d514c3af9c0c139e0fc21ed6ef9219edc4dd0043be36e25bf9b6"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.604736 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfsc2" podStartSLOduration=123.604697894 podStartE2EDuration="2m3.604697894s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.559343936 +0000 UTC m=+142.706585927" watchObservedRunningTime="2026-01-23 16:19:01.604697894 +0000 UTC m=+142.751939885" Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.605688 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" podStartSLOduration=123.605682559 podStartE2EDuration="2m3.605682559s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.603711518 +0000 UTC m=+142.750953509" watchObservedRunningTime="2026-01-23 16:19:01.605682559 +0000 UTC 
m=+142.752924550" Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.630735 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rlqmc"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.652014 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kts8"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.655118 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.677590 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.678088 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.178064957 +0000 UTC m=+143.325306948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.684288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" event={"ID":"8ed9dcbf-5502-4797-9b65-ff900aa065d8","Type":"ContainerStarted","Data":"8b6ac9fb3f93836629f5daf244076d391189076c06ff5ed2be352ebd207fb6a9"} Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.694997 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.702883 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq"] Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.704770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hzrpf" event={"ID":"6e51ecf0-a72a-461c-a669-8bce49b39003","Type":"ContainerStarted","Data":"abd077764547aeb263f3ded968fe12b82b6ab6f3a263cf685781912b0e416865"} Jan 23 16:19:01 crc kubenswrapper[4718]: W0123 16:19:01.705419 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d31ba73_9659_4b08_bd23_26a4f51835bf.slice/crio-8e2af7444500b20942e79ae55146847d46a3f4025ec8b25c2962037a7169fb34 WatchSource:0}: Error finding container 8e2af7444500b20942e79ae55146847d46a3f4025ec8b25c2962037a7169fb34: Status 404 returned error can't find the container with id 
8e2af7444500b20942e79ae55146847d46a3f4025ec8b25c2962037a7169fb34
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.731674 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5vn" podStartSLOduration=122.731652754 podStartE2EDuration="2m2.731652754s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.722724577 +0000 UTC m=+142.869966568" watchObservedRunningTime="2026-01-23 16:19:01.731652754 +0000 UTC m=+142.878894745"
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.747750 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.773803 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hzrpf" podStartSLOduration=122.77377563 podStartE2EDuration="2m2.77377563s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.77301235 +0000 UTC m=+142.920254341" watchObservedRunningTime="2026-01-23 16:19:01.77377563 +0000 UTC m=+142.921017621"
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.791795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.794647 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.294612791 +0000 UTC m=+143.441854782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.795499 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pq76l"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.796408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lfmcg" event={"ID":"1b1a463f-85bc-4503-bb98-0540b914397c","Type":"ContainerStarted","Data":"775d30b6654be23df3e24782a3844010be1d78df501dbf02248fd15b7daf538c"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.796605 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lfmcg" event={"ID":"1b1a463f-85bc-4503-bb98-0540b914397c","Type":"ContainerStarted","Data":"b77cc8702f336ee907b685d5765c2f87ea79fbf45c8a583c79df99a2ba780ab9"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.808790 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.814147 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ps7vw"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.817188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" event={"ID":"e179a011-4637-42d5-9679-e910440d25ac","Type":"ContainerStarted","Data":"0058a1729b995e9b8c885d49407afa584d72f1d3c034429bb531b33c00809b2f"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.820364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" event={"ID":"9d63f14d-4ca8-40ca-bf45-52803a21c4fb","Type":"ContainerStarted","Data":"c90da0c492803c60bdfd8c6fd2539bb6547ee638ec01bb1692930ec5aeb4e2d4"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.823595 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" event={"ID":"6f3f6151-c368-4b06-a85e-a8b9d3969ca0","Type":"ContainerStarted","Data":"09b2d31ce3284a2736e1c5135a235ceb46aa2a1e98d6f33dd0d3f5ee9f9a701a"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.828839 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lfmcg" podStartSLOduration=7.828828735 podStartE2EDuration="7.828828735s" podCreationTimestamp="2026-01-23 16:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.826442294 +0000 UTC m=+142.973684295" watchObservedRunningTime="2026-01-23 16:19:01.828828735 +0000 UTC m=+142.976070726"
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.844253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" event={"ID":"86dfe814-b236-4d53-bb9f-8974dc942f62","Type":"ContainerStarted","Data":"8126a2614423632940b2aa86db9527e05cc9d90587d3c00a501b6297ea2e7aa1"}
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.847120 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skvf4"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.863570 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-th59x"]
Jan 23 16:19:01 crc kubenswrapper[4718]: W0123 16:19:01.866944 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a485827_90f5_4846_a975_b61eefef257f.slice/crio-53c22e5ef85985b1000e0b73aef2e015a9ff398ef1f2984b1b608d69738f7244 WatchSource:0}: Error finding container 53c22e5ef85985b1000e0b73aef2e015a9ff398ef1f2984b1b608d69738f7244: Status 404 returned error can't find the container with id 53c22e5ef85985b1000e0b73aef2e015a9ff398ef1f2984b1b608d69738f7244
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.877439 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5p7bb"]
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.886672 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" podStartSLOduration=122.88664748 podStartE2EDuration="2m2.88664748s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:01.886169098 +0000 UTC m=+143.033411109" watchObservedRunningTime="2026-01-23 16:19:01.88664748 +0000 UTC m=+143.033889471"
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.893016 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.894356 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.394339427 +0000 UTC m=+143.541581418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.908322 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87"
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.928490 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc"]
Jan 23 16:19:01 crc kubenswrapper[4718]: W0123 16:19:01.974328 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2118990_95a4_4a61_8c6a_3a72bdea8642.slice/crio-a04d3870d6b252e5ca91736a75957ad0b4afe627d5ca22e0f3dd7cdcab490da0 WatchSource:0}: Error finding container a04d3870d6b252e5ca91736a75957ad0b4afe627d5ca22e0f3dd7cdcab490da0: Status 404 returned error can't find the container with id a04d3870d6b252e5ca91736a75957ad0b4afe627d5ca22e0f3dd7cdcab490da0
Jan 23 16:19:01 crc kubenswrapper[4718]: I0123 16:19:01.995461 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:01 crc kubenswrapper[4718]: E0123 16:19:01.995844 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.495831748 +0000 UTC m=+143.643073739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.013182 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf"]
Jan 23 16:19:02 crc kubenswrapper[4718]: W0123 16:19:02.019732 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f83a3e0_3cbf_4c84_b243_7501b6cb6a8a.slice/crio-0a2dbd766dd9310c80ee6841743b186e373eeaa17c216ca7eba4b7ff52cf1f62 WatchSource:0}: Error finding container 0a2dbd766dd9310c80ee6841743b186e373eeaa17c216ca7eba4b7ff52cf1f62: Status 404 returned error can't find the container with id 0a2dbd766dd9310c80ee6841743b186e373eeaa17c216ca7eba4b7ff52cf1f62
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.020949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"]
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.084888 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k"]
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.096064 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.096444 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.596427765 +0000 UTC m=+143.743669756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.101949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557"]
Jan 23 16:19:02 crc kubenswrapper[4718]: W0123 16:19:02.127368 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74655295_4c96_4870_b700_b98b7a1e176e.slice/crio-ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e WatchSource:0}: Error finding container ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e: Status 404 returned error can't find the container with id ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.199347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.199723 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.699707222 +0000 UTC m=+143.846949203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.300417 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.300936 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.800920185 +0000 UTC m=+143.948162176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.402693 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.403386 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:02.903374451 +0000 UTC m=+144.050616442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.504540 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.504727 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.004700757 +0000 UTC m=+144.151942748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.504846 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.505148 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.005140528 +0000 UTC m=+144.152382519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.562281 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hzrpf"
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.573606 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 16:19:02 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld
Jan 23 16:19:02 crc kubenswrapper[4718]: [+]process-running ok
Jan 23 16:19:02 crc kubenswrapper[4718]: healthz check failed
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.573692 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.606202 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.606568 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.106552756 +0000 UTC m=+144.253794757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.708407 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.709622 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.209442423 +0000 UTC m=+144.356684414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.812096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.812827 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.312798151 +0000 UTC m=+144.460040142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.914047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:02 crc kubenswrapper[4718]: E0123 16:19:02.914397 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.414385664 +0000 UTC m=+144.561627645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.984832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" event={"ID":"3d9f40b2-a707-4205-987b-862d9ecc5c22","Type":"ContainerStarted","Data":"967d7d35e06396d1cec27ce7cd1fa641cb23496648b18da15ee4e711eb89f748"}
Jan 23 16:19:02 crc kubenswrapper[4718]: I0123 16:19:02.985244 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" event={"ID":"3d9f40b2-a707-4205-987b-862d9ecc5c22","Type":"ContainerStarted","Data":"2371c01b6f53ca400b08742bb3a66275daba148a969d17b512e0668084ddefe4"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.016682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.016887 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.51684752 +0000 UTC m=+144.664089511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.017066 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.018067 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.518053531 +0000 UTC m=+144.665295522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.054416 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" event={"ID":"1a485827-90f5-4846-a975-b61eefef257f","Type":"ContainerStarted","Data":"53c22e5ef85985b1000e0b73aef2e015a9ff398ef1f2984b1b608d69738f7244"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.064431 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" event={"ID":"86dfe814-b236-4d53-bb9f-8974dc942f62","Type":"ContainerStarted","Data":"9fad017f2f65556f098ad4848d2bf5013f52701d52e19718231746777882c728"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.083549 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" event={"ID":"895467b8-6fd4-4822-be4a-c6576d88b855","Type":"ContainerStarted","Data":"a5546f7b6fe106cfede527f9f7db44e0410a953b1bf3309ba5f3431849232317"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.090880 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" event={"ID":"a2118990-95a4-4a61-8c6a-3a72bdea8642","Type":"ContainerStarted","Data":"a04d3870d6b252e5ca91736a75957ad0b4afe627d5ca22e0f3dd7cdcab490da0"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.112326 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pq76l" podStartSLOduration=124.112306587 podStartE2EDuration="2m4.112306587s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.013038622 +0000 UTC m=+144.160280613" watchObservedRunningTime="2026-01-23 16:19:03.112306587 +0000 UTC m=+144.259548578"
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.152697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" event={"ID":"0de3fd37-4875-4504-949f-0c5257019029","Type":"ContainerStarted","Data":"69a2e1dbd0d8e929cd26f06e3741280b975bd4a83d4d21113ddfd0dd43d6b98e"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.153153 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" event={"ID":"0de3fd37-4875-4504-949f-0c5257019029","Type":"ContainerStarted","Data":"c05d78bfb0b85d93fd8701606cfc455f0e569b7de8c7b730d0af49bb35f5cd8e"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.156594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.156791 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.656770302 +0000 UTC m=+144.804012293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.157811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq"
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.173807 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptr79" podStartSLOduration=125.173783696 podStartE2EDuration="2m5.173783696s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.154544235 +0000 UTC m=+144.301786226" watchObservedRunningTime="2026-01-23 16:19:03.173783696 +0000 UTC m=+144.321025687"
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.179107 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lplkt" podStartSLOduration=124.179097932 podStartE2EDuration="2m4.179097932s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.175219932 +0000 UTC m=+144.322461923" watchObservedRunningTime="2026-01-23 16:19:03.179097932 +0000 UTC m=+144.326339913"
Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.181813 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.659600264 +0000 UTC m=+144.806842255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.222098 4718 csr.go:261] certificate signing request csr-fp9tn is approved, waiting to be issued
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.226570 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" event={"ID":"9d63f14d-4ca8-40ca-bf45-52803a21c4fb","Type":"ContainerStarted","Data":"16d66950fe7cf2ae90b473886c29fb4c60b03edc0731eebd5ff5cb1642d114b2"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.227034 4718 csr.go:257] certificate signing request csr-fp9tn is issued
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.260902 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.261863 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.761825743 +0000 UTC m=+144.909067734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.272065 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" event={"ID":"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8","Type":"ContainerStarted","Data":"e97f4ed50934e1b49b2e47e07e5bcb04fc0920b191c2dd90519d24c875345be4"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.272244 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" event={"ID":"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8","Type":"ContainerStarted","Data":"8a30a709974bb04e95e110bae20645d6bd30f0372f583f47713ee9fc996e9f2d"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.282846 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" event={"ID":"b0b8a168-87d3-47a0-8527-6252cf9743df","Type":"ContainerStarted","Data":"5c051a2098baa59ff49f48eceeb3c1690f2c6efff6d506d1ab6a3b8f102901ff"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.290092 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rlqmc" event={"ID":"14019d5a-595b-46f5-98f0-bed12ff9ab9f","Type":"ContainerStarted","Data":"da53c41719fb9a276ee5ca52394184f691b3055add31e2c4e3189a4197567153"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.290162 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rlqmc" event={"ID":"14019d5a-595b-46f5-98f0-bed12ff9ab9f","Type":"ContainerStarted","Data":"75daf1b12a62514641f79fa34f975e9f4f9593ea75f13dcea7f004a13ed954e7"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.292055 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rlqmc"
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.292483 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" event={"ID":"688723bd-e83c-43b0-b21a-83cf348544bd","Type":"ContainerStarted","Data":"ebdc653cece7379f1d05eb1344203d3bf53d0f6794e4c34e2855f087eadaf86d"}
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.292930 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4"
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.301373 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-rlqmc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.301429 4718 prober.go:107] "Probe failed" probeType="Readiness"
pod="openshift-console/downloads-7954f5f757-rlqmc" podUID="14019d5a-595b-46f5-98f0-bed12ff9ab9f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.304156 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" event={"ID":"a29dbc90-997d-4f83-8151-1cfcca661070","Type":"ContainerStarted","Data":"159018db873181b39fe89ecec3a5698f0dac1136ff9e5cc911b40aa7f07eecec"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.320361 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" event={"ID":"34588296-9ec4-4018-ab87-cfbec5d33d98","Type":"ContainerStarted","Data":"5e6776b1f48dae120b2ec62568115f4c57586d09b6007ee0094e4d375a3937b8"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.320509 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" event={"ID":"34588296-9ec4-4018-ab87-cfbec5d33d98","Type":"ContainerStarted","Data":"ad0ea8164efe2bfc7a1e35e5b040e4d5795c8f54d156df6e2cf364a31f9bf704"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.321124 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.330750 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rlqmc" podStartSLOduration=125.330728052 podStartE2EDuration="2m5.330728052s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.319997878 +0000 UTC m=+144.467239869" 
watchObservedRunningTime="2026-01-23 16:19:03.330728052 +0000 UTC m=+144.477970043" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.331716 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2m4vg" podStartSLOduration=124.331709657 podStartE2EDuration="2m4.331709657s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.252556427 +0000 UTC m=+144.399798418" watchObservedRunningTime="2026-01-23 16:19:03.331709657 +0000 UTC m=+144.478951648" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.334848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" event={"ID":"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a","Type":"ContainerStarted","Data":"0a2dbd766dd9310c80ee6841743b186e373eeaa17c216ca7eba4b7ff52cf1f62"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.335651 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4btm4" podStartSLOduration=124.335609636 podStartE2EDuration="2m4.335609636s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.335197356 +0000 UTC m=+144.482439367" watchObservedRunningTime="2026-01-23 16:19:03.335609636 +0000 UTC m=+144.482851627" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.352077 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" event={"ID":"7a72377e-a621-4ebb-b31a-7f405b218eb6","Type":"ContainerStarted","Data":"6ddbbd316b4caac97c645149e00ee276f7c0b4d3c982bfccec873595ed5799ad"} Jan 23 16:19:03 crc 
kubenswrapper[4718]: I0123 16:19:03.363602 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.363968 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.86395282 +0000 UTC m=+145.011194811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.371391 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n58ck" event={"ID":"bbd71c65-e596-40fe-8f5a-86d849c44b24","Type":"ContainerStarted","Data":"49823381ea7d9a7fda33c4590eb1b85a810426ba2be86034dc5ea3a2cac5b37b"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.371935 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n58ck" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.375772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" 
event={"ID":"2d31ba73-9659-4b08-bd23-26a4f51835bf","Type":"ContainerStarted","Data":"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.375813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" event={"ID":"2d31ba73-9659-4b08-bd23-26a4f51835bf","Type":"ContainerStarted","Data":"8e2af7444500b20942e79ae55146847d46a3f4025ec8b25c2962037a7169fb34"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.376493 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.386997 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" event={"ID":"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9","Type":"ContainerStarted","Data":"1eaec409f651f6f52d2c408db0824bba6a60728a34aff30234eae9b25a09e9ac"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.387075 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" event={"ID":"44ee7633-a090-4fc2-80ce-2e4fe0ddcad9","Type":"ContainerStarted","Data":"e663dae7705ea1d41d233579aebebb742839dd0b393b3dc6df7a8f54c4af330b"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.393033 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p6xg6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.393354 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.399573 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" event={"ID":"6f3f6151-c368-4b06-a85e-a8b9d3969ca0","Type":"ContainerStarted","Data":"89fc3af3d796d5ad0e9c8efdb1a60bbf7025182a25255ed4260fa702e5a0cdca"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.410869 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" podStartSLOduration=124.410847347 podStartE2EDuration="2m4.410847347s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.408538027 +0000 UTC m=+144.555780028" watchObservedRunningTime="2026-01-23 16:19:03.410847347 +0000 UTC m=+144.558089338" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.416584 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" event={"ID":"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d","Type":"ContainerStarted","Data":"93865a3aac663d80988cfef8cfdb22734216dffc1b4afb7acf5e23039b89edcb"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.416652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" event={"ID":"01f81b66-e3c3-411c-9f5a-db54b3a3aa1d","Type":"ContainerStarted","Data":"1caad4a5279c5527f5bff30e766c2ab13e2930dc4a941b5f7d3bd0df392fe156"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.417666 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 
16:19:03.433082 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-69wt2" podStartSLOduration=124.433061534 podStartE2EDuration="2m4.433061534s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.432220133 +0000 UTC m=+144.579462124" watchObservedRunningTime="2026-01-23 16:19:03.433061534 +0000 UTC m=+144.580303525" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.440821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" event={"ID":"05b88402-e2e8-4f32-a75e-1f434da51313","Type":"ContainerStarted","Data":"91497b98bf8fc30cb468d6eeb13b19bd76a5837db21b6103a39600ffa3dda391"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.464845 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.466457 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:03.966438956 +0000 UTC m=+145.113680947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.470266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" event={"ID":"6ec99fe0-df93-4747-917e-ed13e27f917f","Type":"ContainerStarted","Data":"b087a7e5579a89d22afa04740840c325d37d91e212e0d6e6665b362d24f84ea2"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.470322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" event={"ID":"6ec99fe0-df93-4747-917e-ed13e27f917f","Type":"ContainerStarted","Data":"7d78c4a7b4b8ed85b5d04f67e9390f36579e9ae0cdc52a4d9b6520a9da9ee767"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.471264 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.483003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.498917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.499197 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" 
event={"ID":"c8067326-df8d-4bb1-8cb4-bca3e592a6b0","Type":"ContainerStarted","Data":"8f0ec73881fbe7bc96e6b1ce006f780e9b56507f621f1b246225ba4d7c21b2a2"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.499333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" event={"ID":"c8067326-df8d-4bb1-8cb4-bca3e592a6b0","Type":"ContainerStarted","Data":"83d557d1645ffbeec57fe23d30747ebc821f57321c42d022fe78e473af84ee83"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.500009 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" podStartSLOduration=124.499996002 podStartE2EDuration="2m4.499996002s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.467121263 +0000 UTC m=+144.614363254" watchObservedRunningTime="2026-01-23 16:19:03.499996002 +0000 UTC m=+144.647237993" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.520019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vppcb" podStartSLOduration=124.519996843 podStartE2EDuration="2m4.519996843s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.500161277 +0000 UTC m=+144.647403268" watchObservedRunningTime="2026-01-23 16:19:03.519996843 +0000 UTC m=+144.667238834" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.523572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" 
event={"ID":"74655295-4c96-4870-b700-b98b7a1e176e","Type":"ContainerStarted","Data":"9b9e6c4a115db5a667ad483c1897b85c47d26e31cc7bc4778b8dab16f6b22381"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.531928 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" event={"ID":"74655295-4c96-4870-b700-b98b7a1e176e","Type":"ContainerStarted","Data":"ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.532013 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" event={"ID":"045e9b38-2543-4b98-919f-55227f2094a9","Type":"ContainerStarted","Data":"894f0e009807ef9e052ca377c0bdb915ceea656fc3124e41dc2beb511ac3f1a9"} Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.532081 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.547513 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4kts8" podStartSLOduration=124.547486625 podStartE2EDuration="2m4.547486625s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.532557604 +0000 UTC m=+144.679799595" watchObservedRunningTime="2026-01-23 16:19:03.547486625 +0000 UTC m=+144.694728616" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.553170 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.561501 4718 patch_prober.go:28] interesting pod/console-operator-58897d9998-5p7bb container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.561558 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" podUID="045e9b38-2543-4b98-919f-55227f2094a9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.568340 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n58ck" podStartSLOduration=9.568324466 podStartE2EDuration="9.568324466s" podCreationTimestamp="2026-01-23 16:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.567586388 +0000 UTC m=+144.714828389" watchObservedRunningTime="2026-01-23 16:19:03.568324466 +0000 UTC m=+144.715566457" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.575951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.580265 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.080237571 +0000 UTC m=+145.227479562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.627188 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:03 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:03 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:03 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.627647 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.677091 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.677907 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 16:19:04.177883584 +0000 UTC m=+145.325125575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.678949 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ktp9d" podStartSLOduration=124.67893334 podStartE2EDuration="2m4.67893334s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.678671073 +0000 UTC m=+144.825913064" watchObservedRunningTime="2026-01-23 16:19:03.67893334 +0000 UTC m=+144.826175331" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.730734 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mn2k5" podStartSLOduration=124.730699981 podStartE2EDuration="2m4.730699981s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.727870969 +0000 UTC m=+144.875112960" watchObservedRunningTime="2026-01-23 16:19:03.730699981 +0000 UTC m=+144.877941972" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.756314 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" podStartSLOduration=125.756285645 
podStartE2EDuration="2m5.756285645s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.753207425 +0000 UTC m=+144.900449416" watchObservedRunningTime="2026-01-23 16:19:03.756285645 +0000 UTC m=+144.903527636" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.786567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.787175 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.287161023 +0000 UTC m=+145.434403014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.835752 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j57jq" podStartSLOduration=124.835730802 podStartE2EDuration="2m4.835730802s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.827050261 +0000 UTC m=+144.974292252" watchObservedRunningTime="2026-01-23 16:19:03.835730802 +0000 UTC m=+144.982972793" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.886593 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" podStartSLOduration=125.88657106 podStartE2EDuration="2m5.88657106s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:03.884307832 +0000 UTC m=+145.031549823" watchObservedRunningTime="2026-01-23 16:19:03.88657106 +0000 UTC m=+145.033813051" Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.887926 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.888276 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.388259423 +0000 UTC m=+145.535501414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:03 crc kubenswrapper[4718]: I0123 16:19:03.989396 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:03 crc kubenswrapper[4718]: E0123 16:19:03.989801 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.489790785 +0000 UTC m=+145.637032776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.090445 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.090686 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.590653429 +0000 UTC m=+145.737895420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.091164 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.091743 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.591734737 +0000 UTC m=+145.738976728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.193125 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.193411 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.693364211 +0000 UTC m=+145.840606202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.193852 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.194377 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.694369787 +0000 UTC m=+145.841611778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.228544 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 16:14:03 +0000 UTC, rotation deadline is 2026-10-29 04:13:19.357730292 +0000 UTC Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.228601 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6683h54m15.129133792s for next certificate rotation Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.295517 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.295768 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.795728274 +0000 UTC m=+145.942970265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.295953 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.296271 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.796257087 +0000 UTC m=+145.943499078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.398065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.398283 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.898252771 +0000 UTC m=+146.045494762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.398508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.398846 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.898839136 +0000 UTC m=+146.046081127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.499533 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:04.999515816 +0000 UTC m=+146.146757807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.499455 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.499815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.500150 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.000134971 +0000 UTC m=+146.147376962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.556425 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" event={"ID":"3d8b7f97-e18e-49a9-a3a2-cfd01da217a8","Type":"ContainerStarted","Data":"b70d225493c9b6ee2eb3be36b157db658c9f0f4a035890d1b6aab9682284c5a3"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.556556 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.562239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" event={"ID":"b0b8a168-87d3-47a0-8527-6252cf9743df","Type":"ContainerStarted","Data":"e1672b07c5e11c1c92a6a937b39ade70bd7a11915b1190a9d20c5e8abfb12bd6"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.562284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" event={"ID":"b0b8a168-87d3-47a0-8527-6252cf9743df","Type":"ContainerStarted","Data":"f3d6a9d31395b9e12c1240849423ceaff8c7e95ecb8e4c3adcc00b9a3976189b"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.566656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2fc5k" event={"ID":"34588296-9ec4-4018-ab87-cfbec5d33d98","Type":"ContainerStarted","Data":"dffab39bc663b0f36495a6304fc69a80784ceac0d9d2e42180ded4ca1fea4983"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.569557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" event={"ID":"05b88402-e2e8-4f32-a75e-1f434da51313","Type":"ContainerStarted","Data":"d8c281aa523aab550a1f8e2cf002384c089d1bf997d1cc3d37a8d82c65b10679"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.569614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" event={"ID":"05b88402-e2e8-4f32-a75e-1f434da51313","Type":"ContainerStarted","Data":"9ddee39ce33cc404b52046afd6830042becf622edc947962183cbe461ed7a288"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.570826 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:04 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:04 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:04 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.570876 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" 
podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.572140 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" event={"ID":"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a","Type":"ContainerStarted","Data":"61fd8dab4852954dcf05901cd081918235bc7c1877bfaed4d529701326892e63"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.572173 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" event={"ID":"8f83a3e0-3cbf-4c84-b243-7501b6cb6a8a","Type":"ContainerStarted","Data":"833e0643e1bcab314c920d0be18c1f5958eaf39984b82d30f1c2f2a7304dc29c"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.577748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" event={"ID":"895467b8-6fd4-4822-be4a-c6576d88b855","Type":"ContainerStarted","Data":"1caf983eaf7e07554e0485447faf17a61241eb46ee03b8a4af3def14b03d8ad4"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.580538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" event={"ID":"a29dbc90-997d-4f83-8151-1cfcca661070","Type":"ContainerStarted","Data":"b4ab71f28d610a659ef3da31c356edfaa6b2000d66bdd1282fedc39c1ca575bb"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.582202 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" event={"ID":"045e9b38-2543-4b98-919f-55227f2094a9","Type":"ContainerStarted","Data":"6fc0a26809f174d30759d5e712a28a2436fdc6f77b90deef4c6760194568eeac"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.589525 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" 
event={"ID":"a2118990-95a4-4a61-8c6a-3a72bdea8642","Type":"ContainerStarted","Data":"b5a5cd6dcdb4787ed9292b1bddcb7f0435b2ae1f7a4e563dd23cfe6d6e8bda54"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.598259 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p6xg6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.598283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" event={"ID":"1a485827-90f5-4846-a975-b61eefef257f","Type":"ContainerStarted","Data":"1d7b3db03ec64c956566502e4f034d03c1d4f7795bc1ad37ca5eb3dd549641a8"} Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.598312 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.599339 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-rlqmc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.599386 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rlqmc" podUID="14019d5a-595b-46f5-98f0-bed12ff9ab9f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 16:19:04 crc 
kubenswrapper[4718]: I0123 16:19:04.600369 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.600902 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.100888943 +0000 UTC m=+146.248130934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.601620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.603705 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 16:19:05.103690285 +0000 UTC m=+146.250932276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.612193 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" podStartSLOduration=125.612169001 podStartE2EDuration="2m5.612169001s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.587215404 +0000 UTC m=+145.734457395" watchObservedRunningTime="2026-01-23 16:19:04.612169001 +0000 UTC m=+145.759410992" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.614222 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-skvf4" podStartSLOduration=125.614214494 podStartE2EDuration="2m5.614214494s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.611422382 +0000 UTC m=+145.758664383" watchObservedRunningTime="2026-01-23 16:19:04.614214494 +0000 UTC m=+145.761456475" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.617852 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" Jan 23 16:19:04 crc 
kubenswrapper[4718]: I0123 16:19:04.655927 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-th59x" podStartSLOduration=125.655910528 podStartE2EDuration="2m5.655910528s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.655554889 +0000 UTC m=+145.802796880" watchObservedRunningTime="2026-01-23 16:19:04.655910528 +0000 UTC m=+145.803152519" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.679467 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" podStartSLOduration=126.679445708 podStartE2EDuration="2m6.679445708s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.677593351 +0000 UTC m=+145.824835342" watchObservedRunningTime="2026-01-23 16:19:04.679445708 +0000 UTC m=+145.826687699" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.703546 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2v8tf" podStartSLOduration=125.703524764 podStartE2EDuration="2m5.703524764s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.70181903 +0000 UTC m=+145.849061021" watchObservedRunningTime="2026-01-23 16:19:04.703524764 +0000 UTC m=+145.850766755" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.705793 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.706082 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.206035618 +0000 UTC m=+146.353277759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.712654 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.712977 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.212959514 +0000 UTC m=+146.360201505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.728698 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d4557" podStartSLOduration=125.728684776 podStartE2EDuration="2m5.728684776s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.726792117 +0000 UTC m=+145.874034108" watchObservedRunningTime="2026-01-23 16:19:04.728684776 +0000 UTC m=+145.875926767" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.765411 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-stp9k" podStartSLOduration=125.765396123 podStartE2EDuration="2m5.765396123s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:04.76415553 +0000 UTC m=+145.911397521" watchObservedRunningTime="2026-01-23 16:19:04.765396123 +0000 UTC m=+145.912638114" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.813749 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.814094 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.314071745 +0000 UTC m=+146.461313736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.915679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.916047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.916087 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:04 crc kubenswrapper[4718]: E0123 16:19:04.926122 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.426105845 +0000 UTC m=+146.573347836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.928879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:19:04 crc kubenswrapper[4718]: I0123 16:19:04.940988 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 
23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.003969 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.013101 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-967v4"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.014498 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.016717 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.016857 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.016949 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.023295 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.523258025 +0000 UTC m=+146.670500036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.025314 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.031297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.034427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.048063 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-967v4"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 
16:19:05.122329 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.122388 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.122433 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.122462 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvdn\" (UniqueName: \"kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.122767 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 16:19:05.622755364 +0000 UTC m=+146.769997355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.195766 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.196612 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.198431 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.214482 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.223816 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content\") pod \"community-operators-967v4\" (UID: 
\"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239643 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frg4g\" (UniqueName: \"kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvdn\" (UniqueName: \"kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.239784 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.240230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.240445 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.240758 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.740744736 +0000 UTC m=+146.887986727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.270493 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.293087 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.298979 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvdn\" (UniqueName: \"kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn\") pod \"community-operators-967v4\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") " pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.353503 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.353893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.353934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.353968 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frg4g\" (UniqueName: \"kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.354474 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.854462889 +0000 UTC m=+147.001704870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.355314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.355767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.393142 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frg4g\" (UniqueName: \"kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g\") pod \"certified-operators-9xdfb\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") " pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.410931 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.433431 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.434311 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.457118 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.457420 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:05.957391696 +0000 UTC m=+147.104633687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.459648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.510921 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.559393 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng7d\" (UniqueName: \"kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.559438 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.559475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities\") pod \"community-operators-sdbp8\" (UID: 
\"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.559505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.559804 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.0597925 +0000 UTC m=+147.207034481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.582948 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:05 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:05 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:05 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.583012 4718 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.583035 4718 patch_prober.go:28] interesting pod/console-operator-58897d9998-5p7bb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.583099 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" podUID="045e9b38-2543-4b98-919f-55227f2094a9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.614726 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.615599 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.633514 4718 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.649662 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668221 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668413 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8th\" (UniqueName: \"kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668465 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng7d\" (UniqueName: \"kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668487 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities\") pod 
\"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668540 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668557 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.668980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.669048 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 16:19:06.169034718 +0000 UTC m=+147.316276709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.670614 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.678514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" event={"ID":"a2118990-95a4-4a61-8c6a-3a72bdea8642","Type":"ContainerStarted","Data":"191bc7ae02f1ee7209f1e19712bc8efca4302bf2c96b8c9f4dae71cf47f8069c"} Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.678552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" event={"ID":"a2118990-95a4-4a61-8c6a-3a72bdea8642","Type":"ContainerStarted","Data":"837850e7a25ea19a13924c9475c70694f230c5c821f5c3d39e545e7e8b72fef1"} Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.712464 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-rlqmc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.712518 4718 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rlqmc" podUID="14019d5a-595b-46f5-98f0-bed12ff9ab9f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.721279 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.766869 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.770796 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.774465 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng7d\" (UniqueName: \"kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d\") pod \"community-operators-sdbp8\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.783740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.783979 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.784461 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.784785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.784982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8th\" (UniqueName: \"kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.787861 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.287836681 +0000 UTC m=+147.435078672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.819820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8th\" (UniqueName: \"kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th\") pod \"certified-operators-p7jnq\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.834819 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.887696 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.890142 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.390118781 +0000 UTC m=+147.537360772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.987832 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:19:05 crc kubenswrapper[4718]: I0123 16:19:05.990612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:05 crc kubenswrapper[4718]: E0123 16:19:05.990937 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.490925525 +0000 UTC m=+147.638167516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.093342 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.093514 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.593476003 +0000 UTC m=+147.740717994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.093905 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.094304 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.594292613 +0000 UTC m=+147.741534604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.195440 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.195812 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.695784634 +0000 UTC m=+147.843026625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.196095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.196430 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.696417681 +0000 UTC m=+147.843659662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.206579 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-967v4"] Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.298159 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.298383 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.798352642 +0000 UTC m=+147.945594633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.298454 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: E0123 16:19:06.298808 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:19:06.798800234 +0000 UTC m=+147.946042225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmshq" (UID: "366c0aee-b870-49b2-8500-06f6529c270c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.316107 4718 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T16:19:05.633875031Z","Handler":null,"Name":""} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.320966 4718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.321006 4718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.400431 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.510321 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.529538 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"] Jan 23 16:19:06 crc kubenswrapper[4718]: W0123 16:19:06.536395 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddace0865_fb1e_42df_b857_85285c561bb8.slice/crio-f8500662887908442ecb73489f2f28e4e8c264560c0fd52639028d7be8f16b7c WatchSource:0}: Error finding container f8500662887908442ecb73489f2f28e4e8c264560c0fd52639028d7be8f16b7c: Status 404 returned error can't find the container with id f8500662887908442ecb73489f2f28e4e8c264560c0fd52639028d7be8f16b7c Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.570270 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.579825 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:06 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:06 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:06 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.579904 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.585439 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.605480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: W0123 16:19:06.607380 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b400da1_43ec_45d6_87ee_3baea6d9f22d.slice/crio-f983c62892b46c068cb917efb0da242fff0e684b8b5b384cd3a9ed1fed2c7aec WatchSource:0}: Error finding container f983c62892b46c068cb917efb0da242fff0e684b8b5b384cd3a9ed1fed2c7aec: Status 404 returned error can't find the container with id f983c62892b46c068cb917efb0da242fff0e684b8b5b384cd3a9ed1fed2c7aec Jan 23 16:19:06 crc kubenswrapper[4718]: W0123 16:19:06.610601 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8bc3317_7018_4527_a8f8_7d27072bd326.slice/crio-bc104bc4202e60a5e1de8663ba1cb7d84be8b656f3f97491b5a8d605717a1886 WatchSource:0}: Error finding container bc104bc4202e60a5e1de8663ba1cb7d84be8b656f3f97491b5a8d605717a1886: Status 404 returned error can't find the container with id bc104bc4202e60a5e1de8663ba1cb7d84be8b656f3f97491b5a8d605717a1886 Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.612980 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.613014 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.647353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmshq\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.663314 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.671408 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.671475 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.680869 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.687919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerStarted","Data":"f983c62892b46c068cb917efb0da242fff0e684b8b5b384cd3a9ed1fed2c7aec"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.695489 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" event={"ID":"a2118990-95a4-4a61-8c6a-3a72bdea8642","Type":"ContainerStarted","Data":"3efee1f6fa4d351dbe5718c29eb6e6f4091bc3b4ea81332e5762a67221c07250"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.708186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5403a16e3db891198b3d24f5df2b1e907b55735d61c719c49bf64ba7b353de25"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.708268 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"db3e9676a04875ddd23134fc6354be24bb1f9b44a750de94dd730a0687db517d"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.716047 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5575bfe856d20602964ecadb9717f1aa6eea701c190e6a3370edceefc4a26db3"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.716108 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ebd1e8d0ab8a17ca89e988f48fb108f2dcfb714594649385a21a37ba78995b5e"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.716477 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.732002 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"eb832ad64bd79f13362878c3cccb1ba694db5fb89116747bdd2e5eee2d98f7cb"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.732059 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01ab9c104be6a671c9928e82eca705d863590712c4773234cdf5dbf733f0a69d"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.738496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" 
event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerStarted","Data":"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.738557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerStarted","Data":"f8500662887908442ecb73489f2f28e4e8c264560c0fd52639028d7be8f16b7c"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.740775 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerStarted","Data":"bc104bc4202e60a5e1de8663ba1cb7d84be8b656f3f97491b5a8d605717a1886"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.741995 4718 generic.go:334] "Generic (PLEG): container finished" podID="74655295-4c96-4870-b700-b98b7a1e176e" containerID="9b9e6c4a115db5a667ad483c1897b85c47d26e31cc7bc4778b8dab16f6b22381" exitCode=0 Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.742050 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" event={"ID":"74655295-4c96-4870-b700-b98b7a1e176e","Type":"ContainerDied","Data":"9b9e6c4a115db5a667ad483c1897b85c47d26e31cc7bc4778b8dab16f6b22381"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.742559 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.742623 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.748332 4718 generic.go:334] "Generic (PLEG): container finished" podID="219b51b6-4118-4212-94a2-48d6b2116112" 
containerID="99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713" exitCode=0 Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.748745 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerDied","Data":"99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.748805 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerStarted","Data":"dce444de76d13ae5e3f9ce45eca09e86bff40641cb396d2b919244ea0ecebfe9"} Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.750032 4718 patch_prober.go:28] interesting pod/console-f9d7485db-lt6zb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.750071 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lt6zb" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.751487 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.757331 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.776584 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" 
podStartSLOduration=12.776564049 podStartE2EDuration="12.776564049s" podCreationTimestamp="2026-01-23 16:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:06.76954828 +0000 UTC m=+147.916790281" watchObservedRunningTime="2026-01-23 16:19:06.776564049 +0000 UTC m=+147.923806040" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.806249 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.806299 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.832973 4718 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qwldq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]log ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]etcd ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/max-in-flight-filter ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 16:19:06 crc kubenswrapper[4718]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 16:19:06 crc 
kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-startinformers ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 16:19:06 crc kubenswrapper[4718]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 16:19:06 crc kubenswrapper[4718]: livez check failed Jan 23 16:19:06 crc kubenswrapper[4718]: I0123 16:19:06.833064 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" podUID="a29dbc90-997d-4f83-8151-1cfcca661070" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.008274 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.009762 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.016892 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.025107 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.073917 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.124979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmsm\" (UniqueName: \"kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.125033 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.125111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.154328 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.226396 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbmsm\" (UniqueName: \"kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.226437 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.226492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.227255 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.227339 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content\") pod \"redhat-marketplace-smckr\" (UID: 
\"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.244578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmsm\" (UniqueName: \"kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm\") pod \"redhat-marketplace-smckr\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") " pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.332909 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.390467 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.391435 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.406806 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.527870 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"] Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.530559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2txt\" (UniqueName: \"kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.530679 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.530702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.566323 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:07 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:07 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:07 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.566370 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.632786 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2txt\" (UniqueName: \"kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.633168 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.633190 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.633750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.634015 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.654186 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2txt\" (UniqueName: \"kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt\") pod \"redhat-marketplace-9j9tn\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.713576 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.757838 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" event={"ID":"366c0aee-b870-49b2-8500-06f6529c270c","Type":"ContainerStarted","Data":"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c"} Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.757885 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" event={"ID":"366c0aee-b870-49b2-8500-06f6529c270c","Type":"ContainerStarted","Data":"17bcaa06b625a0504917524cea3fa8812bf1ca3ad92711792bfcc19a61cc985b"} Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.757963 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.776213 4718 generic.go:334] "Generic (PLEG): container finished" podID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerID="fdd9b5d0a8b187cefea3ea357228ba003b66175fec1f099ce5f92311c68ac477" exitCode=0 Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.776843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerDied","Data":"fdd9b5d0a8b187cefea3ea357228ba003b66175fec1f099ce5f92311c68ac477"} Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.782614 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-rlqmc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.782672 4718 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-rlqmc" podUID="14019d5a-595b-46f5-98f0-bed12ff9ab9f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.782768 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-rlqmc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.782871 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rlqmc" podUID="14019d5a-595b-46f5-98f0-bed12ff9ab9f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.805031 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" podStartSLOduration=128.805005841 podStartE2EDuration="2m8.805005841s" podCreationTimestamp="2026-01-23 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:07.7807199 +0000 UTC m=+148.927961891" watchObservedRunningTime="2026-01-23 16:19:07.805005841 +0000 UTC m=+148.952247832" Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.806135 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerID="bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8" exitCode=0 Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.806221 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" 
event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerDied","Data":"bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8"} Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.813302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerStarted","Data":"10e380cefe8297c39e15078cc21997801f73be43233dc6b3d2363717ed53eb06"} Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.827505 4718 generic.go:334] "Generic (PLEG): container finished" podID="dace0865-fb1e-42df-b857-85285c561bb8" containerID="705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1" exitCode=0 Jan 23 16:19:07 crc kubenswrapper[4718]: I0123 16:19:07.827892 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerDied","Data":"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"} Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.099497 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:19:08 crc kubenswrapper[4718]: W0123 16:19:08.132958 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f628e92_a5fe_4048_81c7_33b3f6a11792.slice/crio-d651ad5f963bb909f784b8f790d15e4f229ead69d9e0ccdfce54415ec34dc98e WatchSource:0}: Error finding container d651ad5f963bb909f784b8f790d15e4f229ead69d9e0ccdfce54415ec34dc98e: Status 404 returned error can't find the container with id d651ad5f963bb909f784b8f790d15e4f229ead69d9e0ccdfce54415ec34dc98e Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.205449 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"] Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.206480 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.208871 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.242547 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"] Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.244537 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.255188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp66p\" (UniqueName: \"kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.255229 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.255263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356167 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sg5q\" (UniqueName: \"kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q\") pod \"74655295-4c96-4870-b700-b98b7a1e176e\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume\") pod \"74655295-4c96-4870-b700-b98b7a1e176e\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356284 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume\") pod \"74655295-4c96-4870-b700-b98b7a1e176e\" (UID: \"74655295-4c96-4870-b700-b98b7a1e176e\") " Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp66p\" (UniqueName: \"kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356510 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.356941 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.364100 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume" (OuterVolumeSpecName: "config-volume") pod "74655295-4c96-4870-b700-b98b7a1e176e" (UID: "74655295-4c96-4870-b700-b98b7a1e176e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.364422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.371135 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74655295-4c96-4870-b700-b98b7a1e176e" (UID: "74655295-4c96-4870-b700-b98b7a1e176e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.371490 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q" (OuterVolumeSpecName: "kube-api-access-4sg5q") pod "74655295-4c96-4870-b700-b98b7a1e176e" (UID: "74655295-4c96-4870-b700-b98b7a1e176e"). InnerVolumeSpecName "kube-api-access-4sg5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.391712 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp66p\" (UniqueName: \"kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p\") pod \"redhat-operators-jq6kd\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.457848 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74655295-4c96-4870-b700-b98b7a1e176e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.458276 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74655295-4c96-4870-b700-b98b7a1e176e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.458289 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sg5q\" (UniqueName: \"kubernetes.io/projected/74655295-4c96-4870-b700-b98b7a1e176e-kube-api-access-4sg5q\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.554872 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.562260 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.567088 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:08 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:08 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:08 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.567143 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.592050 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:19:08 crc kubenswrapper[4718]: E0123 16:19:08.592257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74655295-4c96-4870-b700-b98b7a1e176e" containerName="collect-profiles" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.592268 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="74655295-4c96-4870-b700-b98b7a1e176e" containerName="collect-profiles" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.592353 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="74655295-4c96-4870-b700-b98b7a1e176e" containerName="collect-profiles" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.593078 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.602080 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.661535 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.661676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.661720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv7wc\" (UniqueName: \"kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.763071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.763129 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sv7wc\" (UniqueName: \"kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.763227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.768998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.769025 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.792550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv7wc\" (UniqueName: \"kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc\") pod \"redhat-operators-9mxjz\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.837470 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.837468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c" event={"ID":"74655295-4c96-4870-b700-b98b7a1e176e","Type":"ContainerDied","Data":"ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e"} Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.837572 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8fd23e73f585af961ef89af02f38319d9a7dabb7c27b67c3e793d65b8af42e" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.850328 4718 generic.go:334] "Generic (PLEG): container finished" podID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerID="ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848" exitCode=0 Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.851089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerDied","Data":"ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848"} Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.877427 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j9tn" event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerDied","Data":"59ca1726187d03c52b3d0524933dcd6f068a92665167ad6726d079797f00a9bc"} Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.877769 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerID="59ca1726187d03c52b3d0524933dcd6f068a92665167ad6726d079797f00a9bc" exitCode=0 Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.878183 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j9tn" 
event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerStarted","Data":"d651ad5f963bb909f784b8f790d15e4f229ead69d9e0ccdfce54415ec34dc98e"} Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.920032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.922531 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.928258 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.933502 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.940989 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 16:19:08 crc kubenswrapper[4718]: I0123 16:19:08.941273 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.067075 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.067137 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.068221 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"] Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.172310 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.172367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.173397 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.216274 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.277747 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.318509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:19:09 crc kubenswrapper[4718]: W0123 16:19:09.365841 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabdcb612_10fc_49af_97ef_a8265d36bb9d.slice/crio-9834edbb405abf45ddaed07c4a148c8c1bce8e8455487613dd34c46ec489a76e WatchSource:0}: Error finding container 9834edbb405abf45ddaed07c4a148c8c1bce8e8455487613dd34c46ec489a76e: Status 404 returned error can't find the container with id 9834edbb405abf45ddaed07c4a148c8c1bce8e8455487613dd34c46ec489a76e Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.566273 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:09 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:09 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:09 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.566334 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.620915 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:19:09 crc kubenswrapper[4718]: W0123 16:19:09.758997 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod03258925_7bc6_4a58_8bdc_8ec9a22ba540.slice/crio-dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48 WatchSource:0}: Error finding container dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48: Status 404 returned error can't find the container with id dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48 Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.889607 4718 generic.go:334] "Generic (PLEG): container finished" podID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerID="fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c" exitCode=0 Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.889722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerDied","Data":"fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c"} Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.889765 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerStarted","Data":"9834edbb405abf45ddaed07c4a148c8c1bce8e8455487613dd34c46ec489a76e"} Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.896195 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03258925-7bc6-4a58-8bdc-8ec9a22ba540","Type":"ContainerStarted","Data":"dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48"} Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.923607 4718 generic.go:334] "Generic (PLEG): container finished" podID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerID="25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1" exitCode=0 Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.923670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerDied","Data":"25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1"} Jan 23 16:19:09 crc kubenswrapper[4718]: I0123 16:19:09.923701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerStarted","Data":"463ffc4ced6371d6be9bb1a2227747f93ed96b0ebbd94f163cbfe86b5103bb68"} Jan 23 16:19:10 crc kubenswrapper[4718]: I0123 16:19:10.574873 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:10 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:10 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:10 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:10 crc kubenswrapper[4718]: I0123 16:19:10.575364 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:10 crc kubenswrapper[4718]: I0123 16:19:10.933608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03258925-7bc6-4a58-8bdc-8ec9a22ba540","Type":"ContainerStarted","Data":"ccbb9fdb47097e93a375c0787e28683b685079ca746273023a6c76a9b5065444"} Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.565164 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 
23 16:19:11 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:11 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:11 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.565244 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.814163 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.820050 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.897614 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.897594936 podStartE2EDuration="3.897594936s" podCreationTimestamp="2026-01-23 16:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:10.967671109 +0000 UTC m=+152.114913100" watchObservedRunningTime="2026-01-23 16:19:11.897594936 +0000 UTC m=+153.044836927" Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.970759 4718 generic.go:334] "Generic (PLEG): container finished" podID="03258925-7bc6-4a58-8bdc-8ec9a22ba540" containerID="ccbb9fdb47097e93a375c0787e28683b685079ca746273023a6c76a9b5065444" exitCode=0 Jan 23 16:19:11 crc kubenswrapper[4718]: I0123 16:19:11.971494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"03258925-7bc6-4a58-8bdc-8ec9a22ba540","Type":"ContainerDied","Data":"ccbb9fdb47097e93a375c0787e28683b685079ca746273023a6c76a9b5065444"} Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.539207 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.550920 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.555185 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.555360 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.557358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.592383 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:12 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:12 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:12 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.592434 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.652285 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.652808 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.754455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.754579 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.754795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.779951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.853127 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n58ck" Jan 23 16:19:12 crc kubenswrapper[4718]: I0123 16:19:12.922364 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.304937 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.370725 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir\") pod \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.370731 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03258925-7bc6-4a58-8bdc-8ec9a22ba540" (UID: "03258925-7bc6-4a58-8bdc-8ec9a22ba540"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.371106 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access\") pod \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\" (UID: \"03258925-7bc6-4a58-8bdc-8ec9a22ba540\") " Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.371490 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.380483 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03258925-7bc6-4a58-8bdc-8ec9a22ba540" (UID: "03258925-7bc6-4a58-8bdc-8ec9a22ba540"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.472620 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03258925-7bc6-4a58-8bdc-8ec9a22ba540-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.541739 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:19:13 crc kubenswrapper[4718]: W0123 16:19:13.549280 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc7157877_42e0_48c3_bfc5_025a2b8586d5.slice/crio-b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991 WatchSource:0}: Error finding container b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991: Status 404 returned error can't find the container with id b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991 Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.566578 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:13 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:13 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:13 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.566662 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.989524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7157877-42e0-48c3-bfc5-025a2b8586d5","Type":"ContainerStarted","Data":"b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991"} Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.992478 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03258925-7bc6-4a58-8bdc-8ec9a22ba540","Type":"ContainerDied","Data":"dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48"} Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.992507 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae95e2c4ed194d59eb31d8996fff0818c474d48ae12e7590be56a5370d4cb48" Jan 23 16:19:13 crc kubenswrapper[4718]: I0123 16:19:13.992675 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:19:14 crc kubenswrapper[4718]: I0123 16:19:14.566047 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:14 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:14 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:14 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:14 crc kubenswrapper[4718]: I0123 16:19:14.566112 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:15 crc kubenswrapper[4718]: I0123 16:19:15.001449 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"c7157877-42e0-48c3-bfc5-025a2b8586d5","Type":"ContainerStarted","Data":"3eeeca79d892a24474cdf569e16f2a971afeb02cc2bcb07cf4ae53ec418be705"} Jan 23 16:19:15 crc kubenswrapper[4718]: I0123 16:19:15.021228 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.021199678 podStartE2EDuration="3.021199678s" podCreationTimestamp="2026-01-23 16:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:15.017291638 +0000 UTC m=+156.164533649" watchObservedRunningTime="2026-01-23 16:19:15.021199678 +0000 UTC m=+156.168441679" Jan 23 16:19:15 crc kubenswrapper[4718]: I0123 16:19:15.572000 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:15 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:15 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:15 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:15 crc kubenswrapper[4718]: I0123 16:19:15.572087 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.031928 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7157877-42e0-48c3-bfc5-025a2b8586d5" containerID="3eeeca79d892a24474cdf569e16f2a971afeb02cc2bcb07cf4ae53ec418be705" exitCode=0 Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.031996 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"c7157877-42e0-48c3-bfc5-025a2b8586d5","Type":"ContainerDied","Data":"3eeeca79d892a24474cdf569e16f2a971afeb02cc2bcb07cf4ae53ec418be705"} Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.563682 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:16 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:16 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:16 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.563748 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.743463 4718 patch_prober.go:28] interesting pod/console-f9d7485db-lt6zb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 16:19:16 crc kubenswrapper[4718]: I0123 16:19:16.743519 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lt6zb" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 16:19:17 crc kubenswrapper[4718]: I0123 16:19:17.565828 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:17 crc 
kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:17 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:17 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:17 crc kubenswrapper[4718]: I0123 16:19:17.566262 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:17 crc kubenswrapper[4718]: I0123 16:19:17.805495 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rlqmc" Jan 23 16:19:18 crc kubenswrapper[4718]: I0123 16:19:18.566825 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:18 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:18 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:18 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:18 crc kubenswrapper[4718]: I0123 16:19:18.567070 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:19 crc kubenswrapper[4718]: I0123 16:19:19.565680 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:19 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:19 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:19 crc 
kubenswrapper[4718]: healthz check failed Jan 23 16:19:19 crc kubenswrapper[4718]: I0123 16:19:19.565776 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:20 crc kubenswrapper[4718]: I0123 16:19:20.571180 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:20 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:20 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:20 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:20 crc kubenswrapper[4718]: I0123 16:19:20.571607 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.311335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.318435 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/593a4237-c13e-4403-b139-f32b552ca770-metrics-certs\") pod \"network-metrics-daemon-dppxp\" (UID: \"593a4237-c13e-4403-b139-f32b552ca770\") " pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:19:21 crc kubenswrapper[4718]: 
I0123 16:19:21.366344 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dppxp" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.422981 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.514422 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir\") pod \"c7157877-42e0-48c3-bfc5-025a2b8586d5\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.514541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access\") pod \"c7157877-42e0-48c3-bfc5-025a2b8586d5\" (UID: \"c7157877-42e0-48c3-bfc5-025a2b8586d5\") " Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.514580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7157877-42e0-48c3-bfc5-025a2b8586d5" (UID: "c7157877-42e0-48c3-bfc5-025a2b8586d5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.514823 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7157877-42e0-48c3-bfc5-025a2b8586d5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.520197 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7157877-42e0-48c3-bfc5-025a2b8586d5" (UID: "c7157877-42e0-48c3-bfc5-025a2b8586d5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.566353 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:19:21 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Jan 23 16:19:21 crc kubenswrapper[4718]: [+]process-running ok Jan 23 16:19:21 crc kubenswrapper[4718]: healthz check failed Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.566415 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:19:21 crc kubenswrapper[4718]: I0123 16:19:21.616471 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7157877-42e0-48c3-bfc5-025a2b8586d5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:22 crc kubenswrapper[4718]: I0123 16:19:22.084929 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7157877-42e0-48c3-bfc5-025a2b8586d5","Type":"ContainerDied","Data":"b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991"} Jan 23 16:19:22 crc kubenswrapper[4718]: I0123 16:19:22.085004 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b992e59f115d78d1b31b7661c9caf1077ef2ac532d9d61cc54f15a79bba43991" Jan 23 16:19:22 crc kubenswrapper[4718]: I0123 16:19:22.085035 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:19:22 crc kubenswrapper[4718]: I0123 16:19:22.565738 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:19:22 crc kubenswrapper[4718]: I0123 16:19:22.568301 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hzrpf" Jan 23 16:19:26 crc kubenswrapper[4718]: I0123 16:19:26.675231 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:19:26 crc kubenswrapper[4718]: I0123 16:19:26.747281 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:19:26 crc kubenswrapper[4718]: I0123 16:19:26.750999 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:19:28 crc kubenswrapper[4718]: I0123 16:19:28.876035 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:19:28 crc kubenswrapper[4718]: I0123 16:19:28.876162 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:19:37 crc kubenswrapper[4718]: I0123 16:19:37.680250 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bnnc" Jan 23 16:19:42 crc kubenswrapper[4718]: E0123 16:19:42.239484 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 16:19:42 crc kubenswrapper[4718]: E0123 16:19:42.240654 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbmsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-smckr_openshift-marketplace(8be4ad13-7119-48b8-9f6e-3848463eba75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:42 crc kubenswrapper[4718]: E0123 16:19:42.241963 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-smckr" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" Jan 23 16:19:42 crc 
kubenswrapper[4718]: E0123 16:19:42.921571 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 16:19:42 crc kubenswrapper[4718]: E0123 16:19:42.921857 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dt8th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-p7jnq_openshift-marketplace(4b400da1-43ec-45d6-87ee-3baea6d9f22d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:42 crc kubenswrapper[4718]: E0123 16:19:42.923252 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p7jnq" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.338404 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p7jnq" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.338945 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-smckr" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.405548 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.406063 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sdbp8_openshift-marketplace(b8bc3317-7018-4527-a8f8-7d27072bd326): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.407259 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sdbp8" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.423123 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.423271 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9xdfb_openshift-marketplace(dace0865-fb1e-42df-b857-85285c561bb8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:44 crc kubenswrapper[4718]: E0123 16:19:44.425791 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9xdfb" podUID="dace0865-fb1e-42df-b857-85285c561bb8" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.275992 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.752679 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 16:19:45 crc kubenswrapper[4718]: E0123 16:19:45.753569 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7157877-42e0-48c3-bfc5-025a2b8586d5" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.753600 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7157877-42e0-48c3-bfc5-025a2b8586d5" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: E0123 16:19:45.753614 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03258925-7bc6-4a58-8bdc-8ec9a22ba540" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.753623 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="03258925-7bc6-4a58-8bdc-8ec9a22ba540" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: 
I0123 16:19:45.753779 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="03258925-7bc6-4a58-8bdc-8ec9a22ba540" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.753817 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7157877-42e0-48c3-bfc5-025a2b8586d5" containerName="pruner" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.754320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.756261 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.757061 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.757132 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.865919 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.866359 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.968287 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.968367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.968480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:45 crc kubenswrapper[4718]: I0123 16:19:45.991551 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:46 crc kubenswrapper[4718]: I0123 16:19:46.081898 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.703804 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9xdfb" podUID="dace0865-fb1e-42df-b857-85285c561bb8" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.703829 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sdbp8" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.806403 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.806554 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chvdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-967v4_openshift-marketplace(219b51b6-4118-4212-94a2-48d6b2116112): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.808925 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-967v4" podUID="219b51b6-4118-4212-94a2-48d6b2116112" Jan 23 16:19:47 crc 
kubenswrapper[4718]: E0123 16:19:47.826696 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.827241 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv7wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-9mxjz_openshift-marketplace(abdcb612-10fc-49af-97ef-a8265d36bb9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:19:47 crc kubenswrapper[4718]: E0123 16:19:47.828447 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9mxjz" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.129167 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dppxp"] Jan 23 16:19:48 crc kubenswrapper[4718]: W0123 16:19:48.150375 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod593a4237_c13e_4403_b139_f32b552ca770.slice/crio-66148282a7650a80d4d4c7c6c2a23e8caf21a66ef96a357522238a4fdf05b3d5 WatchSource:0}: Error finding container 66148282a7650a80d4d4c7c6c2a23e8caf21a66ef96a357522238a4fdf05b3d5: Status 404 returned error can't find the container with id 66148282a7650a80d4d4c7c6c2a23e8caf21a66ef96a357522238a4fdf05b3d5 Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.204428 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 16:19:48 crc kubenswrapper[4718]: W0123 16:19:48.215225 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode1245184_0423_4328_aace_30c8c7d03989.slice/crio-4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce WatchSource:0}: Error finding container 4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce: Status 404 returned error can't find the container with id 
4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.243916 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerID="5abfd42956c25fd1cafc3b06f36e6df9d0cbdd7bd3214e5819556378402dbd48" exitCode=0 Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.244264 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j9tn" event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerDied","Data":"5abfd42956c25fd1cafc3b06f36e6df9d0cbdd7bd3214e5819556378402dbd48"} Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.251201 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e1245184-0423-4328-aace-30c8c7d03989","Type":"ContainerStarted","Data":"4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce"} Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.253970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dppxp" event={"ID":"593a4237-c13e-4403-b139-f32b552ca770","Type":"ContainerStarted","Data":"66148282a7650a80d4d4c7c6c2a23e8caf21a66ef96a357522238a4fdf05b3d5"} Jan 23 16:19:48 crc kubenswrapper[4718]: I0123 16:19:48.272856 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerStarted","Data":"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b"} Jan 23 16:19:48 crc kubenswrapper[4718]: E0123 16:19:48.279575 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-967v4" podUID="219b51b6-4118-4212-94a2-48d6b2116112" 
Jan 23 16:19:48 crc kubenswrapper[4718]: E0123 16:19:48.280346 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9mxjz" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.281508 4718 generic.go:334] "Generic (PLEG): container finished" podID="e1245184-0423-4328-aace-30c8c7d03989" containerID="4ac63b1f53673d71e03adc81ee2d96ed0b9705b5dec8ad0993d4585d8e00544e" exitCode=0 Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.281622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e1245184-0423-4328-aace-30c8c7d03989","Type":"ContainerDied","Data":"4ac63b1f53673d71e03adc81ee2d96ed0b9705b5dec8ad0993d4585d8e00544e"} Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.309672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dppxp" event={"ID":"593a4237-c13e-4403-b139-f32b552ca770","Type":"ContainerStarted","Data":"c228eeaf43728a53c708a0cd7739b46871054cbf8d80b3374c578cc1fb755feb"} Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.309743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dppxp" event={"ID":"593a4237-c13e-4403-b139-f32b552ca770","Type":"ContainerStarted","Data":"0359f3a857a29c0f4879244f2be0146373997da12cc229b1ce0d19a13032b79f"} Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.315063 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j9tn" event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerStarted","Data":"2324f3b569a11c4c1b7449a8df88339f4e7d15fda3be29efe0230fc24ab24ab3"} Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.317437 4718 
generic.go:334] "Generic (PLEG): container finished" podID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerID="eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b" exitCode=0 Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.317477 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerDied","Data":"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b"} Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.332675 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dppxp" podStartSLOduration=171.332647907 podStartE2EDuration="2m51.332647907s" podCreationTimestamp="2026-01-23 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:49.327217329 +0000 UTC m=+190.474459330" watchObservedRunningTime="2026-01-23 16:19:49.332647907 +0000 UTC m=+190.479889908" Jan 23 16:19:49 crc kubenswrapper[4718]: I0123 16:19:49.355075 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9j9tn" podStartSLOduration=2.191112782 podStartE2EDuration="42.355054729s" podCreationTimestamp="2026-01-23 16:19:07 +0000 UTC" firstStartedPulling="2026-01-23 16:19:08.880554795 +0000 UTC m=+150.027796786" lastFinishedPulling="2026-01-23 16:19:49.044496742 +0000 UTC m=+190.191738733" observedRunningTime="2026-01-23 16:19:49.350764719 +0000 UTC m=+190.498006740" watchObservedRunningTime="2026-01-23 16:19:49.355054729 +0000 UTC m=+190.502296720" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.336205 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6kd" 
event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerStarted","Data":"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68"} Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.364071 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jq6kd" podStartSLOduration=2.476328258 podStartE2EDuration="42.364043764s" podCreationTimestamp="2026-01-23 16:19:08 +0000 UTC" firstStartedPulling="2026-01-23 16:19:09.925437866 +0000 UTC m=+151.072679857" lastFinishedPulling="2026-01-23 16:19:49.813153362 +0000 UTC m=+190.960395363" observedRunningTime="2026-01-23 16:19:50.360517234 +0000 UTC m=+191.507759305" watchObservedRunningTime="2026-01-23 16:19:50.364043764 +0000 UTC m=+191.511285755" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.641468 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.730261 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:19:50 crc kubenswrapper[4718]: E0123 16:19:50.730564 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1245184-0423-4328-aace-30c8c7d03989" containerName="pruner" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.730577 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1245184-0423-4328-aace-30c8c7d03989" containerName="pruner" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.730689 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1245184-0423-4328-aace-30c8c7d03989" containerName="pruner" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.731204 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.741461 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir\") pod \"e1245184-0423-4328-aace-30c8c7d03989\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.741602 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access\") pod \"e1245184-0423-4328-aace-30c8c7d03989\" (UID: \"e1245184-0423-4328-aace-30c8c7d03989\") " Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.741931 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.742006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.742062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 
16:19:50.742202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e1245184-0423-4328-aace-30c8c7d03989" (UID: "e1245184-0423-4328-aace-30c8c7d03989"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.746713 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.764458 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e1245184-0423-4328-aace-30c8c7d03989" (UID: "e1245184-0423-4328-aace-30c8c7d03989"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843196 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843286 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843341 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock\") pod 
\"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843376 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1245184-0423-4328-aace-30c8c7d03989-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843388 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1245184-0423-4328-aace-30c8c7d03989-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.843427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.880644 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access\") pod \"installer-9-crc\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:50 crc kubenswrapper[4718]: I0123 16:19:50.961143 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"] Jan 23 16:19:51 crc kubenswrapper[4718]: I0123 16:19:51.053647 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:19:51 crc kubenswrapper[4718]: I0123 16:19:51.345057 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:19:51 crc kubenswrapper[4718]: I0123 16:19:51.345213 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e1245184-0423-4328-aace-30c8c7d03989","Type":"ContainerDied","Data":"4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce"} Jan 23 16:19:51 crc kubenswrapper[4718]: I0123 16:19:51.346095 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa472037106651dc5d72e29c08393adf61378809b5f0e65129036ee7c2a40ce" Jan 23 16:19:51 crc kubenswrapper[4718]: I0123 16:19:51.509080 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:19:52 crc kubenswrapper[4718]: I0123 16:19:52.351279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86dfa729-72c2-42d8-a66f-00e7b0ed98d6","Type":"ContainerStarted","Data":"c067add4a1fc22ee88a0b31d9bbc04e87358383c6ac1be335c77e45331b3ffed"} Jan 23 16:19:52 crc kubenswrapper[4718]: I0123 16:19:52.351706 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86dfa729-72c2-42d8-a66f-00e7b0ed98d6","Type":"ContainerStarted","Data":"b28bb4f162a4484efc294f324fb505efd8a8dbae6c3c6eaff6c4e0001d55cfb9"} Jan 23 16:19:52 crc kubenswrapper[4718]: I0123 16:19:52.367708 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.367688318 podStartE2EDuration="2.367688318s" podCreationTimestamp="2026-01-23 16:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:19:52.365098902 +0000 UTC m=+193.512340893" watchObservedRunningTime="2026-01-23 16:19:52.367688318 +0000 UTC m=+193.514930309" Jan 23 16:19:57 crc kubenswrapper[4718]: I0123 16:19:57.714261 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:57 crc kubenswrapper[4718]: I0123 16:19:57.714962 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:57 crc kubenswrapper[4718]: I0123 16:19:57.789160 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.393859 4718 generic.go:334] "Generic (PLEG): container finished" podID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerID="8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878" exitCode=0 Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.393904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerDied","Data":"8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878"} Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.401082 4718 generic.go:334] "Generic (PLEG): container finished" podID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerID="67098824ab9b442386b31fa6aaec2a7b69f1c7098c4bf60aa54427c698a32bf7" exitCode=0 Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.401178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerDied","Data":"67098824ab9b442386b31fa6aaec2a7b69f1c7098c4bf60aa54427c698a32bf7"} Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 
16:19:58.446595 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.555261 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.555351 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.596492 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.875797 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:19:58 crc kubenswrapper[4718]: I0123 16:19:58.876583 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:19:59 crc kubenswrapper[4718]: I0123 16:19:59.453289 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:20:00 crc kubenswrapper[4718]: I0123 16:20:00.177943 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:20:00 crc kubenswrapper[4718]: I0123 16:20:00.412861 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-9j9tn" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="registry-server" containerID="cri-o://2324f3b569a11c4c1b7449a8df88339f4e7d15fda3be29efe0230fc24ab24ab3" gracePeriod=2 Jan 23 16:20:01 crc kubenswrapper[4718]: I0123 16:20:01.421991 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerID="2324f3b569a11c4c1b7449a8df88339f4e7d15fda3be29efe0230fc24ab24ab3" exitCode=0 Jan 23 16:20:01 crc kubenswrapper[4718]: I0123 16:20:01.422237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j9tn" event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerDied","Data":"2324f3b569a11c4c1b7449a8df88339f4e7d15fda3be29efe0230fc24ab24ab3"} Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.197431 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.303791 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2txt\" (UniqueName: \"kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt\") pod \"7f628e92-a5fe-4048-81c7-33b3f6a11792\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.303962 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities\") pod \"7f628e92-a5fe-4048-81c7-33b3f6a11792\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.304005 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content\") pod 
\"7f628e92-a5fe-4048-81c7-33b3f6a11792\" (UID: \"7f628e92-a5fe-4048-81c7-33b3f6a11792\") " Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.304740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities" (OuterVolumeSpecName: "utilities") pod "7f628e92-a5fe-4048-81c7-33b3f6a11792" (UID: "7f628e92-a5fe-4048-81c7-33b3f6a11792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.306469 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.312935 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt" (OuterVolumeSpecName: "kube-api-access-g2txt") pod "7f628e92-a5fe-4048-81c7-33b3f6a11792" (UID: "7f628e92-a5fe-4048-81c7-33b3f6a11792"). InnerVolumeSpecName "kube-api-access-g2txt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.326449 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f628e92-a5fe-4048-81c7-33b3f6a11792" (UID: "7f628e92-a5fe-4048-81c7-33b3f6a11792"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.407956 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f628e92-a5fe-4048-81c7-33b3f6a11792-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.407995 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2txt\" (UniqueName: \"kubernetes.io/projected/7f628e92-a5fe-4048-81c7-33b3f6a11792-kube-api-access-g2txt\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.431668 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerStarted","Data":"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"} Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.433360 4718 generic.go:334] "Generic (PLEG): container finished" podID="dace0865-fb1e-42df-b857-85285c561bb8" containerID="0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab" exitCode=0 Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.433450 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerDied","Data":"0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab"} Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.435769 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerStarted","Data":"ef208ab2375e4308422629d469c0756d84be3bc5a206efdd8ee8637ff646d868"} Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.438073 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-9j9tn" event={"ID":"7f628e92-a5fe-4048-81c7-33b3f6a11792","Type":"ContainerDied","Data":"d651ad5f963bb909f784b8f790d15e4f229ead69d9e0ccdfce54415ec34dc98e"} Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.438109 4718 scope.go:117] "RemoveContainer" containerID="2324f3b569a11c4c1b7449a8df88339f4e7d15fda3be29efe0230fc24ab24ab3" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.438204 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j9tn" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.456174 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-smckr" podStartSLOduration=3.477069968 podStartE2EDuration="56.456154743s" podCreationTimestamp="2026-01-23 16:19:06 +0000 UTC" firstStartedPulling="2026-01-23 16:19:08.859110977 +0000 UTC m=+150.006352968" lastFinishedPulling="2026-01-23 16:20:01.838195752 +0000 UTC m=+202.985437743" observedRunningTime="2026-01-23 16:20:02.452992079 +0000 UTC m=+203.600234080" watchObservedRunningTime="2026-01-23 16:20:02.456154743 +0000 UTC m=+203.603396734" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.461326 4718 scope.go:117] "RemoveContainer" containerID="5abfd42956c25fd1cafc3b06f36e6df9d0cbdd7bd3214e5819556378402dbd48" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.491253 4718 scope.go:117] "RemoveContainer" containerID="59ca1726187d03c52b3d0524933dcd6f068a92665167ad6726d079797f00a9bc" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.497901 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7jnq" podStartSLOduration=3.301880045 podStartE2EDuration="57.497869564s" podCreationTimestamp="2026-01-23 16:19:05 +0000 UTC" firstStartedPulling="2026-01-23 16:19:07.78069475 +0000 UTC m=+148.927936741" lastFinishedPulling="2026-01-23 
16:20:01.976684269 +0000 UTC m=+203.123926260" observedRunningTime="2026-01-23 16:20:02.492855041 +0000 UTC m=+203.640097032" watchObservedRunningTime="2026-01-23 16:20:02.497869564 +0000 UTC m=+203.645111555" Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.515492 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:20:02 crc kubenswrapper[4718]: I0123 16:20:02.521011 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j9tn"] Jan 23 16:20:03 crc kubenswrapper[4718]: I0123 16:20:03.149510 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" path="/var/lib/kubelet/pods/7f628e92-a5fe-4048-81c7-33b3f6a11792/volumes" Jan 23 16:20:03 crc kubenswrapper[4718]: I0123 16:20:03.445493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerStarted","Data":"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"} Jan 23 16:20:03 crc kubenswrapper[4718]: I0123 16:20:03.448103 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerStarted","Data":"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5"} Jan 23 16:20:03 crc kubenswrapper[4718]: I0123 16:20:03.464809 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xdfb" podStartSLOduration=2.110047716 podStartE2EDuration="58.464781066s" podCreationTimestamp="2026-01-23 16:19:05 +0000 UTC" firstStartedPulling="2026-01-23 16:19:06.751194022 +0000 UTC m=+147.898436013" lastFinishedPulling="2026-01-23 16:20:03.105927362 +0000 UTC m=+204.253169363" observedRunningTime="2026-01-23 16:20:03.461708873 +0000 UTC m=+204.608950874" 
watchObservedRunningTime="2026-01-23 16:20:03.464781066 +0000 UTC m=+204.612023067" Jan 23 16:20:04 crc kubenswrapper[4718]: I0123 16:20:04.458238 4718 generic.go:334] "Generic (PLEG): container finished" podID="219b51b6-4118-4212-94a2-48d6b2116112" containerID="ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4" exitCode=0 Jan 23 16:20:04 crc kubenswrapper[4718]: I0123 16:20:04.458849 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerDied","Data":"ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4"} Jan 23 16:20:04 crc kubenswrapper[4718]: I0123 16:20:04.461859 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerID="b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5" exitCode=0 Jan 23 16:20:04 crc kubenswrapper[4718]: I0123 16:20:04.461921 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerDied","Data":"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5"} Jan 23 16:20:05 crc kubenswrapper[4718]: I0123 16:20:05.512380 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:20:05 crc kubenswrapper[4718]: I0123 16:20:05.513342 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:20:05 crc kubenswrapper[4718]: I0123 16:20:05.556360 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:20:05 crc kubenswrapper[4718]: I0123 16:20:05.989348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:05 crc kubenswrapper[4718]: I0123 16:20:05.989413 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:06 crc kubenswrapper[4718]: I0123 16:20:06.044216 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:07 crc kubenswrapper[4718]: I0123 16:20:07.333918 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:20:07 crc kubenswrapper[4718]: I0123 16:20:07.334265 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:20:07 crc kubenswrapper[4718]: I0123 16:20:07.383784 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:20:07 crc kubenswrapper[4718]: I0123 16:20:07.518289 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-smckr" Jan 23 16:20:09 crc kubenswrapper[4718]: I0123 16:20:09.495501 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerStarted","Data":"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f"} Jan 23 16:20:09 crc kubenswrapper[4718]: I0123 16:20:09.497745 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerStarted","Data":"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf"} Jan 23 16:20:09 crc kubenswrapper[4718]: I0123 16:20:09.500753 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerStarted","Data":"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"} Jan 23 16:20:09 crc kubenswrapper[4718]: I0123 16:20:09.566501 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdbp8" podStartSLOduration=4.036227564 podStartE2EDuration="1m4.566475898s" podCreationTimestamp="2026-01-23 16:19:05 +0000 UTC" firstStartedPulling="2026-01-23 16:19:07.809102615 +0000 UTC m=+148.956344606" lastFinishedPulling="2026-01-23 16:20:08.339350949 +0000 UTC m=+209.486592940" observedRunningTime="2026-01-23 16:20:09.540245319 +0000 UTC m=+210.687487320" watchObservedRunningTime="2026-01-23 16:20:09.566475898 +0000 UTC m=+210.713717889" Jan 23 16:20:10 crc kubenswrapper[4718]: I0123 16:20:10.507058 4718 generic.go:334] "Generic (PLEG): container finished" podID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerID="456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f" exitCode=0 Jan 23 16:20:10 crc kubenswrapper[4718]: I0123 16:20:10.507106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerDied","Data":"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f"} Jan 23 16:20:10 crc kubenswrapper[4718]: I0123 16:20:10.528842 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-967v4" podStartSLOduration=4.921074368 podStartE2EDuration="1m6.528817727s" podCreationTimestamp="2026-01-23 16:19:04 +0000 UTC" firstStartedPulling="2026-01-23 16:19:06.751479849 +0000 UTC m=+147.898721840" lastFinishedPulling="2026-01-23 16:20:08.359223178 +0000 UTC m=+209.506465199" observedRunningTime="2026-01-23 16:20:09.567246158 +0000 UTC m=+210.714488179" watchObservedRunningTime="2026-01-23 
16:20:10.528817727 +0000 UTC m=+211.676059718" Jan 23 16:20:11 crc kubenswrapper[4718]: I0123 16:20:11.516663 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerStarted","Data":"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25"} Jan 23 16:20:11 crc kubenswrapper[4718]: I0123 16:20:11.546273 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9mxjz" podStartSLOduration=2.285293933 podStartE2EDuration="1m3.546245173s" podCreationTimestamp="2026-01-23 16:19:08 +0000 UTC" firstStartedPulling="2026-01-23 16:19:09.891431298 +0000 UTC m=+151.038673289" lastFinishedPulling="2026-01-23 16:20:11.152382508 +0000 UTC m=+212.299624529" observedRunningTime="2026-01-23 16:20:11.544405625 +0000 UTC m=+212.691647636" watchObservedRunningTime="2026-01-23 16:20:11.546245173 +0000 UTC m=+212.693487174" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.412480 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.413720 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.448840 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.581342 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xdfb" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.589622 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-967v4" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 
16:20:15.836199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.836266 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.881456 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:15 crc kubenswrapper[4718]: I0123 16:20:15.991914 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" containerID="cri-o://007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2" gracePeriod=15 Jan 23 16:20:16 crc kubenswrapper[4718]: I0123 16:20:16.071232 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:16 crc kubenswrapper[4718]: I0123 16:20:16.618093 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:17 crc kubenswrapper[4718]: I0123 16:20:17.051579 4718 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hrs87 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Jan 23 16:20:17 crc kubenswrapper[4718]: I0123 16:20:17.051716 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: 
connect: connection refused" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.255525 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288092 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 16:20:18 crc kubenswrapper[4718]: E0123 16:20:18.288305 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288318 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" Jan 23 16:20:18 crc kubenswrapper[4718]: E0123 16:20:18.288328 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="registry-server" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288334 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="registry-server" Jan 23 16:20:18 crc kubenswrapper[4718]: E0123 16:20:18.288341 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="extract-content" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288348 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="extract-content" Jan 23 16:20:18 crc kubenswrapper[4718]: E0123 16:20:18.288357 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="extract-utilities" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288362 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" 
containerName="extract-utilities" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288453 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerName="oauth-openshift" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288462 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f628e92-a5fe-4048-81c7-33b3f6a11792" containerName="registry-server" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.288889 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.305767 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 
16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357932 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357955 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.357979 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358003 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358686 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358839 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358866 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358892 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zd2\" (UniqueName: \"kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358928 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc 
kubenswrapper[4718]: I0123 16:20:18.358968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358999 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert\") pod \"c96e05b0-db9f-4670-839d-f15b53eeffc6\" (UID: \"c96e05b0-db9f-4670-839d-f15b53eeffc6\") " Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.358969 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.359200 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.359213 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.359224 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.359294 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.359472 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.364250 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2" (OuterVolumeSpecName: "kube-api-access-f6zd2") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "kube-api-access-f6zd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.364304 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.370093 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.370921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.371985 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.373927 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.375863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.378447 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.380937 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c96e05b0-db9f-4670-839d-f15b53eeffc6" (UID: "c96e05b0-db9f-4670-839d-f15b53eeffc6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.459902 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.459963 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.459985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460008 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460031 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24kn\" (UniqueName: \"kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460151 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460169 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460200 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460226 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460270 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460322 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460379 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460393 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460402 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc 
kubenswrapper[4718]: I0123 16:20:18.460413 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460422 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460435 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460443 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460452 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zd2\" (UniqueName: \"kubernetes.io/projected/c96e05b0-db9f-4670-839d-f15b53eeffc6-kube-api-access-f6zd2\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460461 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460469 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/c96e05b0-db9f-4670-839d-f15b53eeffc6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.460478 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96e05b0-db9f-4670-839d-f15b53eeffc6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561136 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561234 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561298 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w24kn\" (UniqueName: \"kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561324 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 
16:20:18.561348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561399 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561444 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir\") pod 
\"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.561465 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.562775 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.563962 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.565057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.565464 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.566484 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.567959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.569764 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.572129 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " 
pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.580506 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.580541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.584866 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.585860 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.588282 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="c96e05b0-db9f-4670-839d-f15b53eeffc6" containerID="007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2" exitCode=0 Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.588342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" event={"ID":"c96e05b0-db9f-4670-839d-f15b53eeffc6","Type":"ContainerDied","Data":"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2"} Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.588383 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" event={"ID":"c96e05b0-db9f-4670-839d-f15b53eeffc6","Type":"ContainerDied","Data":"e01a81b3ceec86cba5572ec3be38cc184c5034abe982da49a2365de12bea8647"} Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.588443 4718 scope.go:117] "RemoveContainer" containerID="007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.588751 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hrs87" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.590084 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.596364 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w24kn\" (UniqueName: \"kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn\") pod \"oauth-openshift-7b964c775c-ct48x\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.606480 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.631295 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"] Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.631483 4718 scope.go:117] "RemoveContainer" containerID="007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2" Jan 23 16:20:18 crc kubenswrapper[4718]: E0123 16:20:18.631941 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2\": container with ID starting with 007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2 not found: ID does not exist" containerID="007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.631975 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2"} err="failed to get container status \"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2\": rpc error: code = NotFound desc = could not find container \"007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2\": container with ID starting with 007c5bd180ddb889d72d361e90496194e7cb8cd66029096efc9980027e109ce2 not found: ID does not exist" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.643084 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hrs87"] Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.920515 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.920584 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:18 crc kubenswrapper[4718]: I0123 16:20:18.959108 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.033621 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 16:20:19 crc kubenswrapper[4718]: W0123 16:20:19.041221 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65a1ab25_cdad_4128_a946_070231fd85fb.slice/crio-c77def6f43dc634ef12da2f785378e2b703a1a19ea118faeb97e57aff0dd0e66 WatchSource:0}: Error finding container c77def6f43dc634ef12da2f785378e2b703a1a19ea118faeb97e57aff0dd0e66: Status 404 returned error can't find the container with id c77def6f43dc634ef12da2f785378e2b703a1a19ea118faeb97e57aff0dd0e66 Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.150731 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96e05b0-db9f-4670-839d-f15b53eeffc6" path="/var/lib/kubelet/pods/c96e05b0-db9f-4670-839d-f15b53eeffc6/volumes" Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.579054 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.579352 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdbp8" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="registry-server" containerID="cri-o://a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf" gracePeriod=2 Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.596121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" 
event={"ID":"65a1ab25-cdad-4128-a946-070231fd85fb","Type":"ContainerStarted","Data":"c77def6f43dc634ef12da2f785378e2b703a1a19ea118faeb97e57aff0dd0e66"} Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.636247 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.779051 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:20:19 crc kubenswrapper[4718]: I0123 16:20:19.779309 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p7jnq" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="registry-server" containerID="cri-o://ef208ab2375e4308422629d469c0756d84be3bc5a206efdd8ee8637ff646d868" gracePeriod=2 Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.583382 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.606659 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerID="a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf" exitCode=0 Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.606736 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerDied","Data":"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf"} Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.606769 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdbp8" event={"ID":"b8bc3317-7018-4527-a8f8-7d27072bd326","Type":"ContainerDied","Data":"bc104bc4202e60a5e1de8663ba1cb7d84be8b656f3f97491b5a8d605717a1886"} Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.606791 4718 scope.go:117] "RemoveContainer" containerID="a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.606916 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdbp8" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.608843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" event={"ID":"65a1ab25-cdad-4128-a946-070231fd85fb","Type":"ContainerStarted","Data":"d7973a66542460db1ebb43b78a1a5d36d08eb538372ddd4c7a334f7dcda14880"} Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.610605 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.613729 4718 generic.go:334] "Generic (PLEG): container finished" podID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerID="ef208ab2375e4308422629d469c0756d84be3bc5a206efdd8ee8637ff646d868" exitCode=0 Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.613914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerDied","Data":"ef208ab2375e4308422629d469c0756d84be3bc5a206efdd8ee8637ff646d868"} Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.618751 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.624758 4718 scope.go:117] "RemoveContainer" containerID="b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.648653 4718 scope.go:117] "RemoveContainer" containerID="bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.667800 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" podStartSLOduration=29.667779841 
podStartE2EDuration="29.667779841s" podCreationTimestamp="2026-01-23 16:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:20:20.633203471 +0000 UTC m=+221.780445482" watchObservedRunningTime="2026-01-23 16:20:20.667779841 +0000 UTC m=+221.815021832" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.685842 4718 scope.go:117] "RemoveContainer" containerID="a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf" Jan 23 16:20:20 crc kubenswrapper[4718]: E0123 16:20:20.689015 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf\": container with ID starting with a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf not found: ID does not exist" containerID="a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.689145 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf"} err="failed to get container status \"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf\": rpc error: code = NotFound desc = could not find container \"a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf\": container with ID starting with a7a1e7e56e56c2191745b0f3fcc82f0002f0e73ab59e0049b72c6d019fa84edf not found: ID does not exist" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.689258 4718 scope.go:117] "RemoveContainer" containerID="b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5" Jan 23 16:20:20 crc kubenswrapper[4718]: E0123 16:20:20.694901 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5\": container with ID starting with b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5 not found: ID does not exist" containerID="b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.695017 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5"} err="failed to get container status \"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5\": rpc error: code = NotFound desc = could not find container \"b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5\": container with ID starting with b7f69bab2719a11aa4311c561403c6d8f12ce38f59494021cbc650bf0f9b6be5 not found: ID does not exist" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.695119 4718 scope.go:117] "RemoveContainer" containerID="bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8" Jan 23 16:20:20 crc kubenswrapper[4718]: E0123 16:20:20.695994 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8\": container with ID starting with bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8 not found: ID does not exist" containerID="bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.696121 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8"} err="failed to get container status \"bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8\": rpc error: code = NotFound desc = could not find container \"bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8\": container with ID 
starting with bbb3011c2c51eccf1fa35e966fa2b62c6be2398fdb1820fac3d3aa491e2638e8 not found: ID does not exist" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.704562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content\") pod \"b8bc3317-7018-4527-a8f8-7d27072bd326\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.704652 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities\") pod \"b8bc3317-7018-4527-a8f8-7d27072bd326\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.704726 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tng7d\" (UniqueName: \"kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d\") pod \"b8bc3317-7018-4527-a8f8-7d27072bd326\" (UID: \"b8bc3317-7018-4527-a8f8-7d27072bd326\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.706529 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities" (OuterVolumeSpecName: "utilities") pod "b8bc3317-7018-4527-a8f8-7d27072bd326" (UID: "b8bc3317-7018-4527-a8f8-7d27072bd326"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.716400 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d" (OuterVolumeSpecName: "kube-api-access-tng7d") pod "b8bc3317-7018-4527-a8f8-7d27072bd326" (UID: "b8bc3317-7018-4527-a8f8-7d27072bd326"). 
InnerVolumeSpecName "kube-api-access-tng7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.744791 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.774865 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8bc3317-7018-4527-a8f8-7d27072bd326" (UID: "b8bc3317-7018-4527-a8f8-7d27072bd326"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.806815 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.806852 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tng7d\" (UniqueName: \"kubernetes.io/projected/b8bc3317-7018-4527-a8f8-7d27072bd326-kube-api-access-tng7d\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.806864 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bc3317-7018-4527-a8f8-7d27072bd326-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.907834 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities\") pod \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.907947 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content\") pod \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.908026 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8th\" (UniqueName: \"kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th\") pod \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\" (UID: \"4b400da1-43ec-45d6-87ee-3baea6d9f22d\") " Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.908705 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities" (OuterVolumeSpecName: "utilities") pod "4b400da1-43ec-45d6-87ee-3baea6d9f22d" (UID: "4b400da1-43ec-45d6-87ee-3baea6d9f22d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.913740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th" (OuterVolumeSpecName: "kube-api-access-dt8th") pod "4b400da1-43ec-45d6-87ee-3baea6d9f22d" (UID: "4b400da1-43ec-45d6-87ee-3baea6d9f22d"). InnerVolumeSpecName "kube-api-access-dt8th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.940490 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.946129 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdbp8"] Jan 23 16:20:20 crc kubenswrapper[4718]: I0123 16:20:20.965792 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b400da1-43ec-45d6-87ee-3baea6d9f22d" (UID: "4b400da1-43ec-45d6-87ee-3baea6d9f22d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.010152 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.010208 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b400da1-43ec-45d6-87ee-3baea6d9f22d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.010229 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8th\" (UniqueName: \"kubernetes.io/projected/4b400da1-43ec-45d6-87ee-3baea6d9f22d-kube-api-access-dt8th\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.148537 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" path="/var/lib/kubelet/pods/b8bc3317-7018-4527-a8f8-7d27072bd326/volumes" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.624944 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7jnq" event={"ID":"4b400da1-43ec-45d6-87ee-3baea6d9f22d","Type":"ContainerDied","Data":"f983c62892b46c068cb917efb0da242fff0e684b8b5b384cd3a9ed1fed2c7aec"} Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.625186 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7jnq" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.626414 4718 scope.go:117] "RemoveContainer" containerID="ef208ab2375e4308422629d469c0756d84be3bc5a206efdd8ee8637ff646d868" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.654351 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.660103 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p7jnq"] Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.667265 4718 scope.go:117] "RemoveContainer" containerID="67098824ab9b442386b31fa6aaec2a7b69f1c7098c4bf60aa54427c698a32bf7" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.698992 4718 scope.go:117] "RemoveContainer" containerID="fdd9b5d0a8b187cefea3ea357228ba003b66175fec1f099ce5f92311c68ac477" Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.984386 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:20:21 crc kubenswrapper[4718]: I0123 16:20:21.984869 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9mxjz" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="registry-server" containerID="cri-o://abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25" gracePeriod=2 Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.449998 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.564408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv7wc\" (UniqueName: \"kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc\") pod \"abdcb612-10fc-49af-97ef-a8265d36bb9d\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.564565 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities\") pod \"abdcb612-10fc-49af-97ef-a8265d36bb9d\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.564596 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content\") pod \"abdcb612-10fc-49af-97ef-a8265d36bb9d\" (UID: \"abdcb612-10fc-49af-97ef-a8265d36bb9d\") " Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.565670 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities" (OuterVolumeSpecName: "utilities") pod "abdcb612-10fc-49af-97ef-a8265d36bb9d" (UID: "abdcb612-10fc-49af-97ef-a8265d36bb9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.569800 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc" (OuterVolumeSpecName: "kube-api-access-sv7wc") pod "abdcb612-10fc-49af-97ef-a8265d36bb9d" (UID: "abdcb612-10fc-49af-97ef-a8265d36bb9d"). InnerVolumeSpecName "kube-api-access-sv7wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.635947 4718 generic.go:334] "Generic (PLEG): container finished" podID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerID="abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25" exitCode=0 Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.636036 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mxjz" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.636080 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerDied","Data":"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25"} Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.636135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mxjz" event={"ID":"abdcb612-10fc-49af-97ef-a8265d36bb9d","Type":"ContainerDied","Data":"9834edbb405abf45ddaed07c4a148c8c1bce8e8455487613dd34c46ec489a76e"} Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.636156 4718 scope.go:117] "RemoveContainer" containerID="abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.650813 4718 scope.go:117] "RemoveContainer" containerID="456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.666062 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.666157 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv7wc\" (UniqueName: 
\"kubernetes.io/projected/abdcb612-10fc-49af-97ef-a8265d36bb9d-kube-api-access-sv7wc\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.670797 4718 scope.go:117] "RemoveContainer" containerID="fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.690909 4718 scope.go:117] "RemoveContainer" containerID="abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25" Jan 23 16:20:22 crc kubenswrapper[4718]: E0123 16:20:22.691662 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25\": container with ID starting with abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25 not found: ID does not exist" containerID="abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.691723 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25"} err="failed to get container status \"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25\": rpc error: code = NotFound desc = could not find container \"abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25\": container with ID starting with abacf225f11baa8141edab010e3d612a15d6edfba90f5ab71db4255ad7371b25 not found: ID does not exist" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.691758 4718 scope.go:117] "RemoveContainer" containerID="456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f" Jan 23 16:20:22 crc kubenswrapper[4718]: E0123 16:20:22.694878 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f\": container with ID 
starting with 456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f not found: ID does not exist" containerID="456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.694925 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f"} err="failed to get container status \"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f\": rpc error: code = NotFound desc = could not find container \"456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f\": container with ID starting with 456769cd91e3bc3951c36c1b406d3e03f2a9fadb29a18b76cbf2816dd086a20f not found: ID does not exist" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.694962 4718 scope.go:117] "RemoveContainer" containerID="fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c" Jan 23 16:20:22 crc kubenswrapper[4718]: E0123 16:20:22.696745 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c\": container with ID starting with fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c not found: ID does not exist" containerID="fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.696787 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c"} err="failed to get container status \"fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c\": rpc error: code = NotFound desc = could not find container \"fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c\": container with ID starting with fe898d04d70516a227962dec33fbac484226b86b240161dc3ab2121511f7ab5c not found: 
ID does not exist" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.699339 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abdcb612-10fc-49af-97ef-a8265d36bb9d" (UID: "abdcb612-10fc-49af-97ef-a8265d36bb9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.768588 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abdcb612-10fc-49af-97ef-a8265d36bb9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.968779 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:20:22 crc kubenswrapper[4718]: I0123 16:20:22.974214 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9mxjz"] Jan 23 16:20:23 crc kubenswrapper[4718]: I0123 16:20:23.149147 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" path="/var/lib/kubelet/pods/4b400da1-43ec-45d6-87ee-3baea6d9f22d/volumes" Jan 23 16:20:23 crc kubenswrapper[4718]: I0123 16:20:23.150116 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" path="/var/lib/kubelet/pods/abdcb612-10fc-49af-97ef-a8265d36bb9d/volumes" Jan 23 16:20:28 crc kubenswrapper[4718]: I0123 16:20:28.876099 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:20:28 crc kubenswrapper[4718]: I0123 16:20:28.876431 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:20:28 crc kubenswrapper[4718]: I0123 16:20:28.876488 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:20:28 crc kubenswrapper[4718]: I0123 16:20:28.876992 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:20:28 crc kubenswrapper[4718]: I0123 16:20:28.877080 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012" gracePeriod=600 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.699389 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012" exitCode=0 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.699480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012"} Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 
16:20:29.699922 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833"} Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.748961 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749392 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749423 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749446 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749460 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749490 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749506 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749520 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749534 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749559 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749572 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749596 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749609 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749655 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749669 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="extract-content" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749690 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749704 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="extract-utilities" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.749749 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749763 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749953 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bc3317-7018-4527-a8f8-7d27072bd326" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.749986 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b400da1-43ec-45d6-87ee-3baea6d9f22d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.750004 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="abdcb612-10fc-49af-97ef-a8265d36bb9d" containerName="registry-server" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.750541 4718 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.750589 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.750669 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751211 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44" gracePeriod=15 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751275 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b" gracePeriod=15 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751343 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c" gracePeriod=15 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751311 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637" gracePeriod=15 Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751336 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a" gracePeriod=15 Jan 23 16:20:29 crc 
kubenswrapper[4718]: E0123 16:20:29.751443 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751531 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751571 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751579 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751592 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751599 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751612 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751618 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751654 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751660 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751675 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751682 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 16:20:29 crc kubenswrapper[4718]: E0123 16:20:29.751693 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751700 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751859 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751869 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751878 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751886 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.751895 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.752080 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.754482 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.789369 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.885711 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.885812 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.885844 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.885867 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.886182 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.886262 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.886280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.886400 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987783 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987835 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987900 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987911 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987947 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987972 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.988000 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.988019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.987932 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.988048 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:29 crc kubenswrapper[4718]: I0123 16:20:29.988050 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.086889 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:20:30 crc kubenswrapper[4718]: W0123 16:20:30.110742 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e425df9e0b8bc5fa48972026b356333eb99b606adedcb2a931b310a6bf1a4e11 WatchSource:0}: Error finding container e425df9e0b8bc5fa48972026b356333eb99b606adedcb2a931b310a6bf1a4e11: Status 404 returned error can't find the container with id e425df9e0b8bc5fa48972026b356333eb99b606adedcb2a931b310a6bf1a4e11 Jan 23 16:20:30 crc kubenswrapper[4718]: E0123 16:20:30.113095 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6897deb465bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:20:30.112187836 +0000 UTC m=+231.259429827,LastTimestamp:2026-01-23 16:20:30.112187836 +0000 UTC m=+231.259429827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.709882 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521"} Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.710498 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e425df9e0b8bc5fa48972026b356333eb99b606adedcb2a931b310a6bf1a4e11"} Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.710685 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.713328 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.714890 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.715863 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b" exitCode=0 Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.715902 4718 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637" exitCode=0 Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.715913 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a" exitCode=0 Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.715926 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c" exitCode=2 Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.715970 4718 scope.go:117] "RemoveContainer" containerID="dbaa42f765e5724fd4d725662ca3cddeb71c69aef76a7aa9ba94471d737e517a" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.718255 4718 generic.go:334] "Generic (PLEG): container finished" podID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" containerID="c067add4a1fc22ee88a0b31d9bbc04e87358383c6ac1be335c77e45331b3ffed" exitCode=0 Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.718319 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86dfa729-72c2-42d8-a66f-00e7b0ed98d6","Type":"ContainerDied","Data":"c067add4a1fc22ee88a0b31d9bbc04e87358383c6ac1be335c77e45331b3ffed"} Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.719086 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:30 crc kubenswrapper[4718]: I0123 16:20:30.719777 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:31 crc kubenswrapper[4718]: I0123 16:20:31.750900 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.162590 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.164060 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.164509 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.176490 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.177536 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.178609 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.179489 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.180281 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.321975 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322138 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir\") pod \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 
16:20:32.322161 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322189 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock\") pod \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322297 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock" (OuterVolumeSpecName: "var-lock") pod "86dfa729-72c2-42d8-a66f-00e7b0ed98d6" (UID: "86dfa729-72c2-42d8-a66f-00e7b0ed98d6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322365 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "86dfa729-72c2-42d8-a66f-00e7b0ed98d6" (UID: "86dfa729-72c2-42d8-a66f-00e7b0ed98d6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322432 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access\") pod \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\" (UID: \"86dfa729-72c2-42d8-a66f-00e7b0ed98d6\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322521 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.322770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.323145 4718 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.323242 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.323257 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.323267 4718 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.323294 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.330979 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "86dfa729-72c2-42d8-a66f-00e7b0ed98d6" (UID: "86dfa729-72c2-42d8-a66f-00e7b0ed98d6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.425213 4718 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.425701 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86dfa729-72c2-42d8-a66f-00e7b0ed98d6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.762408 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.763271 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44" exitCode=0 Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.763393 4718 scope.go:117] "RemoveContainer" containerID="44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.763591 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.765820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"86dfa729-72c2-42d8-a66f-00e7b0ed98d6","Type":"ContainerDied","Data":"b28bb4f162a4484efc294f324fb505efd8a8dbae6c3c6eaff6c4e0001d55cfb9"} Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.765865 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28bb4f162a4484efc294f324fb505efd8a8dbae6c3c6eaff6c4e0001d55cfb9" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.767542 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.784153 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.784420 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.784608 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 
16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.787944 4718 scope.go:117] "RemoveContainer" containerID="2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.789804 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.790035 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.790194 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.804995 4718 scope.go:117] "RemoveContainer" containerID="e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.820379 4718 scope.go:117] "RemoveContainer" containerID="ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.834061 4718 scope.go:117] "RemoveContainer" containerID="242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.855563 4718 scope.go:117] "RemoveContainer" 
containerID="1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.875549 4718 scope.go:117] "RemoveContainer" containerID="44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b" Jan 23 16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.876237 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\": container with ID starting with 44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b not found: ID does not exist" containerID="44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.876285 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b"} err="failed to get container status \"44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\": rpc error: code = NotFound desc = could not find container \"44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b\": container with ID starting with 44bbbe35aaa1a889c31b00665ab8b3a0a2a3e177ca1224734df8e9d5ed8f541b not found: ID does not exist" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.876312 4718 scope.go:117] "RemoveContainer" containerID="2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637" Jan 23 16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.876617 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\": container with ID starting with 2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637 not found: ID does not exist" containerID="2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637" Jan 23 16:20:32 crc 
kubenswrapper[4718]: I0123 16:20:32.876688 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637"} err="failed to get container status \"2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\": rpc error: code = NotFound desc = could not find container \"2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637\": container with ID starting with 2bd10bb56043f11941ddc6735a134d57ceb544015f6e9663e91570eb4bc73637 not found: ID does not exist" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.876730 4718 scope.go:117] "RemoveContainer" containerID="e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a" Jan 23 16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.877969 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\": container with ID starting with e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a not found: ID does not exist" containerID="e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878013 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a"} err="failed to get container status \"e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\": rpc error: code = NotFound desc = could not find container \"e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a\": container with ID starting with e4d6dccf7d04f2a836fcfa8b07cff7163e8cfdde0287b08658b4dbeada76169a not found: ID does not exist" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878034 4718 scope.go:117] "RemoveContainer" containerID="ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c" Jan 23 
16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.878270 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\": container with ID starting with ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c not found: ID does not exist" containerID="ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878299 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c"} err="failed to get container status \"ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\": rpc error: code = NotFound desc = could not find container \"ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c\": container with ID starting with ca3d8f7af7237b0ff1e9b65d07dc94617bd606dba7798e126dfe7ccee6f9d89c not found: ID does not exist" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878320 4718 scope.go:117] "RemoveContainer" containerID="242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44" Jan 23 16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.878586 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\": container with ID starting with 242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44 not found: ID does not exist" containerID="242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878615 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44"} err="failed to get container status 
\"242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\": rpc error: code = NotFound desc = could not find container \"242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44\": container with ID starting with 242b4f8685e4ef801bd3d6f3fe2838be3fed5082b23fac9b40797335a71b5a44 not found: ID does not exist" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.878643 4718 scope.go:117] "RemoveContainer" containerID="1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d" Jan 23 16:20:32 crc kubenswrapper[4718]: E0123 16:20:32.879323 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\": container with ID starting with 1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d not found: ID does not exist" containerID="1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d" Jan 23 16:20:32 crc kubenswrapper[4718]: I0123 16:20:32.879395 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d"} err="failed to get container status \"1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\": rpc error: code = NotFound desc = could not find container \"1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d\": container with ID starting with 1420e35ee811a46541aa895f9df198d491ab95a064b10101ed9ca1f53213d95d not found: ID does not exist" Jan 23 16:20:33 crc kubenswrapper[4718]: I0123 16:20:33.145818 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.058863 4718 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.059543 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.060082 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.060821 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.061368 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:34 crc kubenswrapper[4718]: I0123 16:20:34.061421 4718 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.061893 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.262925 4718 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Jan 23 16:20:34 crc kubenswrapper[4718]: E0123 16:20:34.664182 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Jan 23 16:20:35 crc kubenswrapper[4718]: E0123 16:20:35.464882 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Jan 23 16:20:37 crc kubenswrapper[4718]: E0123 16:20:37.066855 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Jan 23 16:20:39 crc kubenswrapper[4718]: I0123 16:20:39.154380 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:39 crc kubenswrapper[4718]: I0123 16:20:39.155736 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: 
connection refused" Jan 23 16:20:39 crc kubenswrapper[4718]: E0123 16:20:39.192476 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6897deb465bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:20:30.112187836 +0000 UTC m=+231.259429827,LastTimestamp:2026-01-23 16:20:30.112187836 +0000 UTC m=+231.259429827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 16:20:40 crc kubenswrapper[4718]: E0123 16:20:40.268275 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="6.4s" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.143244 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.143921 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.144146 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.159428 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.159473 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:41 crc kubenswrapper[4718]: E0123 16:20:41.160032 4718 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.160778 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.830201 4718 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1f6c1d23a6c9c4b7fba065ab9c203dabc6943ff3950dcfb3402963b415062d5e" exitCode=0 Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.830336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1f6c1d23a6c9c4b7fba065ab9c203dabc6943ff3950dcfb3402963b415062d5e"} Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.830507 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3de9ab12d2959516d49d4607ab6cd013bdebbcebd2f350b8b3df5bd507abe267"} Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.830846 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.830862 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:41 crc kubenswrapper[4718]: E0123 16:20:41.831239 4718 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.831504 4718 status_manager.go:851] "Failed to get status for pod" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:41 crc kubenswrapper[4718]: I0123 16:20:41.831938 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.841303 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.841621 4718 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329" exitCode=1 Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.841695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329"} Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.842277 4718 scope.go:117] "RemoveContainer" containerID="08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329" Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.845722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c3f814149a765818d9561d2e57495db1ec696fed4e0db611e23dfd63f4ea5afc"} Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.845755 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"73080d7a10a103e277ad011865993e0b673d574e01b7386277f811e5a99b9687"} Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.845768 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91116869e3ad404ee10a40419b43e59ebdeb8a3e4b96d7506e04d915013400a0"} Jan 23 16:20:42 crc kubenswrapper[4718]: I0123 16:20:42.845781 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"545d1b653d888681ef5093ddcb5892684f529f981d8d1cd5b844d7f342cf725e"} Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.529724 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.854097 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6d812e8f95dd706006f8f2e90ac90af62840cd3bbd7a2734af38422e05ecbb82"} Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.854411 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.854438 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.854468 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:43 crc 
kubenswrapper[4718]: I0123 16:20:43.856083 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 16:20:43 crc kubenswrapper[4718]: I0123 16:20:43.856127 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc"} Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.161465 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.162075 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.172012 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.558606 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.568444 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:20:46 crc kubenswrapper[4718]: I0123 16:20:46.873293 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:20:48 crc kubenswrapper[4718]: I0123 16:20:48.862542 4718 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:48 crc kubenswrapper[4718]: I0123 16:20:48.884254 4718 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:48 crc kubenswrapper[4718]: I0123 16:20:48.884288 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:48 crc kubenswrapper[4718]: I0123 16:20:48.888198 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:20:49 crc kubenswrapper[4718]: I0123 16:20:49.167704 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="178d4a4e-be29-4719-8f4e-80f479a4a5ef" Jan 23 16:20:49 crc kubenswrapper[4718]: I0123 16:20:49.892018 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:49 crc kubenswrapper[4718]: I0123 16:20:49.893227 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15a4e5cd-08da-45a5-a501-88d7b86682f0" Jan 23 16:20:49 crc kubenswrapper[4718]: I0123 16:20:49.897342 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="178d4a4e-be29-4719-8f4e-80f479a4a5ef" Jan 23 16:20:53 crc kubenswrapper[4718]: I0123 16:20:53.535658 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:20:55 crc kubenswrapper[4718]: I0123 16:20:55.351576 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 16:20:56 crc kubenswrapper[4718]: I0123 16:20:56.118157 4718 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 16:20:56 crc kubenswrapper[4718]: I0123 16:20:56.543688 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 16:20:56 crc kubenswrapper[4718]: I0123 16:20:56.586391 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 16:20:58 crc kubenswrapper[4718]: I0123 16:20:58.020426 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 16:20:59 crc kubenswrapper[4718]: I0123 16:20:59.383388 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 16:20:59 crc kubenswrapper[4718]: I0123 16:20:59.409898 4718 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 16:21:00 crc kubenswrapper[4718]: I0123 16:21:00.176005 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 16:21:00 crc kubenswrapper[4718]: I0123 16:21:00.290107 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 16:21:00 crc kubenswrapper[4718]: I0123 16:21:00.534315 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 16:21:00 crc kubenswrapper[4718]: I0123 16:21:00.718409 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 16:21:00 crc kubenswrapper[4718]: I0123 16:21:00.773392 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 
23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.000900 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.015343 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.041211 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.101215 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.722281 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.980918 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 16:21:01 crc kubenswrapper[4718]: I0123 16:21:01.991416 4718 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.132367 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.341055 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.388804 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.389197 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.395080 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.482140 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.541896 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.663879 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 16:21:02 crc kubenswrapper[4718]: I0123 16:21:02.671753 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.040592 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.121583 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.133779 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.147508 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.218698 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 16:21:03 crc kubenswrapper[4718]: 
I0123 16:21:03.221392 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.245851 4718 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.254909 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.25487736 podStartE2EDuration="34.25487736s" podCreationTimestamp="2026-01-23 16:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:20:48.753092042 +0000 UTC m=+249.900334033" watchObservedRunningTime="2026-01-23 16:21:03.25487736 +0000 UTC m=+264.402119391" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.256809 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.256909 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.263129 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.295815 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.29577087 podStartE2EDuration="15.29577087s" podCreationTimestamp="2026-01-23 16:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:21:03.287000099 +0000 UTC m=+264.434242090" watchObservedRunningTime="2026-01-23 16:21:03.29577087 +0000 UTC m=+264.443012901" Jan 23 16:21:03 crc 
kubenswrapper[4718]: I0123 16:21:03.462788 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.529013 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.539248 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.658707 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.750676 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.762141 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.870855 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.885602 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.899606 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 16:21:03 crc kubenswrapper[4718]: I0123 16:21:03.991451 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.111832 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.161442 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.231833 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.317073 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.383326 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.401776 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.404436 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.406444 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.408318 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.474029 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.499548 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 16:21:04 crc 
kubenswrapper[4718]: I0123 16:21:04.509463 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.554494 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.580907 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.610685 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.620486 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.756165 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.808327 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.873869 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 16:21:04 crc kubenswrapper[4718]: I0123 16:21:04.881267 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.005548 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.013691 4718 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.044835 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.056480 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.174479 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.205249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.342413 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.359576 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.362794 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.378086 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.387900 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.546877 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.653164 4718 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.669616 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.754574 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.758334 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.852419 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.874559 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 16:21:05 crc kubenswrapper[4718]: I0123 16:21:05.982096 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.210814 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.298696 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.377865 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.447215 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.462008 4718 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.486928 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.578598 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.613525 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.615507 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.616514 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.665181 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.665481 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.714715 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.718917 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.756258 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" 
Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.833143 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.862810 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.943034 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.943218 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.963480 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 16:21:06 crc kubenswrapper[4718]: I0123 16:21:06.975016 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.000979 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.022326 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.048765 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.243122 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 16:21:07 crc kubenswrapper[4718]: 
I0123 16:21:07.272724 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.309442 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.408949 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.439592 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.542977 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.611060 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.751694 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.835666 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 16:21:07 crc kubenswrapper[4718]: I0123 16:21:07.841922 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.053186 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.131210 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.299087 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.346444 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.426749 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.437215 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.447665 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.479195 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.533591 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.551031 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.573321 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.595673 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 16:21:08 crc 
kubenswrapper[4718]: I0123 16:21:08.619541 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.679297 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.697148 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.728384 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.779666 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.853733 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 16:21:08 crc kubenswrapper[4718]: I0123 16:21:08.904431 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.001449 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.019583 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.135967 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.140853 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.216295 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.246254 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.275475 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.315021 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.376254 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.391617 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.434951 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.435568 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.490569 4718 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.554408 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.606942 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.632879 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.671694 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.703415 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.885481 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.913444 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.921335 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.953984 4718 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.961553 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 16:21:09 crc kubenswrapper[4718]: I0123 16:21:09.988451 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.068885 4718 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.097173 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.097473 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.178944 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.188882 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.196654 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.288922 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.326351 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.351854 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.374587 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.418548 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.422988 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"]
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.423226 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xdfb" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="registry-server" containerID="cri-o://f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef" gracePeriod=30
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.434276 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-967v4"]
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.434498 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-967v4" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="registry-server" containerID="cri-o://e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b" gracePeriod=30
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.452422 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"]
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.452754 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator" containerID="cri-o://3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9" gracePeriod=30
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.464931 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"]
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.465275 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-smckr" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="registry-server" containerID="cri-o://f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1" gracePeriod=30
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.475857 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"]
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.479825 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.493926 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.509951 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.576096 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.587021 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.602846 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.617049 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.621685 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.629146 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.699763 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.742241 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.859509 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-967v4"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.942557 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xdfb"
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.997741 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities\") pod \"219b51b6-4118-4212-94a2-48d6b2116112\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") "
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.997789 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content\") pod \"219b51b6-4118-4212-94a2-48d6b2116112\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") "
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.997819 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chvdn\" (UniqueName: \"kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn\") pod \"219b51b6-4118-4212-94a2-48d6b2116112\" (UID: \"219b51b6-4118-4212-94a2-48d6b2116112\") "
Jan 23 16:21:10 crc kubenswrapper[4718]: I0123 16:21:10.999140 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities" (OuterVolumeSpecName: "utilities") pod "219b51b6-4118-4212-94a2-48d6b2116112" (UID: "219b51b6-4118-4212-94a2-48d6b2116112"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.001650 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smckr"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.005503 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn" (OuterVolumeSpecName: "kube-api-access-chvdn") pod "219b51b6-4118-4212-94a2-48d6b2116112" (UID: "219b51b6-4118-4212-94a2-48d6b2116112"). InnerVolumeSpecName "kube-api-access-chvdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.010403 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.036052 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.057272 4718 generic.go:334] "Generic (PLEG): container finished" podID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerID="f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1" exitCode=0
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.057355 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerDied","Data":"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.057395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smckr" event={"ID":"8be4ad13-7119-48b8-9f6e-3848463eba75","Type":"ContainerDied","Data":"10e380cefe8297c39e15078cc21997801f73be43233dc6b3d2363717ed53eb06"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.057854 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smckr"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.057912 4718 scope.go:117] "RemoveContainer" containerID="f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.062164 4718 generic.go:334] "Generic (PLEG): container finished" podID="dace0865-fb1e-42df-b857-85285c561bb8" containerID="f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef" exitCode=0
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.062246 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xdfb"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.062354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerDied","Data":"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.062404 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xdfb" event={"ID":"dace0865-fb1e-42df-b857-85285c561bb8","Type":"ContainerDied","Data":"f8500662887908442ecb73489f2f28e4e8c264560c0fd52639028d7be8f16b7c"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.065337 4718 generic.go:334] "Generic (PLEG): container finished" podID="219b51b6-4118-4212-94a2-48d6b2116112" containerID="e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b" exitCode=0
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.065434 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerDied","Data":"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.065478 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-967v4" event={"ID":"219b51b6-4118-4212-94a2-48d6b2116112","Type":"ContainerDied","Data":"dce444de76d13ae5e3f9ce45eca09e86bff40641cb396d2b919244ea0ecebfe9"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.065569 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-967v4"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.067619 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerID="3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9" exitCode=0
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.068068 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" event={"ID":"2d31ba73-9659-4b08-bd23-26a4f51835bf","Type":"ContainerDied","Data":"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.068297 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6" event={"ID":"2d31ba73-9659-4b08-bd23-26a4f51835bf","Type":"ContainerDied","Data":"8e2af7444500b20942e79ae55146847d46a3f4025ec8b25c2962037a7169fb34"}
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.068125 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p6xg6"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.068437 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jq6kd" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="registry-server" containerID="cri-o://8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68" gracePeriod=30
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.099972 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics\") pod \"2d31ba73-9659-4b08-bd23-26a4f51835bf\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100117 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmsm\" (UniqueName: \"kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm\") pod \"8be4ad13-7119-48b8-9f6e-3848463eba75\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100146 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsxtb\" (UniqueName: \"kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb\") pod \"2d31ba73-9659-4b08-bd23-26a4f51835bf\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100189 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frg4g\" (UniqueName: \"kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g\") pod \"dace0865-fb1e-42df-b857-85285c561bb8\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities\") pod \"dace0865-fb1e-42df-b857-85285c561bb8\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100551 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca\") pod \"2d31ba73-9659-4b08-bd23-26a4f51835bf\" (UID: \"2d31ba73-9659-4b08-bd23-26a4f51835bf\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100577 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities\") pod \"8be4ad13-7119-48b8-9f6e-3848463eba75\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100626 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content\") pod \"8be4ad13-7119-48b8-9f6e-3848463eba75\" (UID: \"8be4ad13-7119-48b8-9f6e-3848463eba75\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100692 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content\") pod \"dace0865-fb1e-42df-b857-85285c561bb8\" (UID: \"dace0865-fb1e-42df-b857-85285c561bb8\") "
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100961 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.100977 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chvdn\" (UniqueName: \"kubernetes.io/projected/219b51b6-4118-4212-94a2-48d6b2116112-kube-api-access-chvdn\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.103598 4718 scope.go:117] "RemoveContainer" containerID="8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.105906 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2d31ba73-9659-4b08-bd23-26a4f51835bf" (UID: "2d31ba73-9659-4b08-bd23-26a4f51835bf"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.106734 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities" (OuterVolumeSpecName: "utilities") pod "8be4ad13-7119-48b8-9f6e-3848463eba75" (UID: "8be4ad13-7119-48b8-9f6e-3848463eba75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.107482 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities" (OuterVolumeSpecName: "utilities") pod "dace0865-fb1e-42df-b857-85285c561bb8" (UID: "dace0865-fb1e-42df-b857-85285c561bb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.108872 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm" (OuterVolumeSpecName: "kube-api-access-qbmsm") pod "8be4ad13-7119-48b8-9f6e-3848463eba75" (UID: "8be4ad13-7119-48b8-9f6e-3848463eba75"). InnerVolumeSpecName "kube-api-access-qbmsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.112272 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g" (OuterVolumeSpecName: "kube-api-access-frg4g") pod "dace0865-fb1e-42df-b857-85285c561bb8" (UID: "dace0865-fb1e-42df-b857-85285c561bb8"). InnerVolumeSpecName "kube-api-access-frg4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.113341 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2d31ba73-9659-4b08-bd23-26a4f51835bf" (UID: "2d31ba73-9659-4b08-bd23-26a4f51835bf"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.114446 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb" (OuterVolumeSpecName: "kube-api-access-rsxtb") pod "2d31ba73-9659-4b08-bd23-26a4f51835bf" (UID: "2d31ba73-9659-4b08-bd23-26a4f51835bf"). InnerVolumeSpecName "kube-api-access-rsxtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.127231 4718 scope.go:117] "RemoveContainer" containerID="ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.129399 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.148545 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8be4ad13-7119-48b8-9f6e-3848463eba75" (UID: "8be4ad13-7119-48b8-9f6e-3848463eba75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.153153 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "219b51b6-4118-4212-94a2-48d6b2116112" (UID: "219b51b6-4118-4212-94a2-48d6b2116112"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.153739 4718 scope.go:117] "RemoveContainer" containerID="f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.155092 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1\": container with ID starting with f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1 not found: ID does not exist" containerID="f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.155142 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1"} err="failed to get container status \"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1\": rpc error: code = NotFound desc = could not find container \"f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1\": container with ID starting with f1909a38e09f0bd49a34922b5c837a9b4c9f795724a707cd28f00f993f5a91e1 not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.155196 4718 scope.go:117] "RemoveContainer" containerID="8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.155997 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878\": container with ID starting with 8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878 not found: ID does not exist" containerID="8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.156035 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878"} err="failed to get container status \"8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878\": rpc error: code = NotFound desc = could not find container \"8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878\": container with ID starting with 8edf710e6e7a865bfef15ca1d24acfaadb5680f101e9ab873ccfb7ca4c7fc878 not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.156069 4718 scope.go:117] "RemoveContainer" containerID="ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.156485 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848\": container with ID starting with ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848 not found: ID does not exist" containerID="ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.156656 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848"} err="failed to get container status \"ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848\": rpc error: code = NotFound desc = could not find container \"ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848\": container with ID starting with ada1b7d3757c72f04e91bf4c862a980b9b4b57b8cc9b194a61da0cd9b8cc1848 not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.156760 4718 scope.go:117] "RemoveContainer" containerID="f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.172600 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dace0865-fb1e-42df-b857-85285c561bb8" (UID: "dace0865-fb1e-42df-b857-85285c561bb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.189262 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204155 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204358 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204383 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204400 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ad13-7119-48b8-9f6e-3848463eba75-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204413 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dace0865-fb1e-42df-b857-85285c561bb8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204427 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219b51b6-4118-4212-94a2-48d6b2116112-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204438 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d31ba73-9659-4b08-bd23-26a4f51835bf-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204456 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmsm\" (UniqueName: \"kubernetes.io/projected/8be4ad13-7119-48b8-9f6e-3848463eba75-kube-api-access-qbmsm\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204471 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsxtb\" (UniqueName: \"kubernetes.io/projected/2d31ba73-9659-4b08-bd23-26a4f51835bf-kube-api-access-rsxtb\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.204485 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frg4g\" (UniqueName: \"kubernetes.io/projected/dace0865-fb1e-42df-b857-85285c561bb8-kube-api-access-frg4g\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.235012 4718 scope.go:117] "RemoveContainer" containerID="0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.260761 4718 scope.go:117] "RemoveContainer" containerID="705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.294139 4718 scope.go:117] "RemoveContainer" containerID="f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.294207 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.294790 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef\": container with ID starting with f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef not found: ID does not exist" containerID="f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.294837 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef"} err="failed to get container status \"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef\": rpc error: code = NotFound desc = could not find container \"f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef\": container with ID starting with f1c0d40ad058b0781c1d8c1923e9b40b576f83f5b8cb1dea84f21f50624c8aef not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.294878 4718 scope.go:117] "RemoveContainer" containerID="0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.295156 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab\": container with ID starting with 0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab not found: ID does not exist" containerID="0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.295174 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab"} err="failed to get container status \"0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab\": rpc error: code = NotFound desc = could not find container \"0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab\": container with ID starting with 0599825560b02de4f4dbe2f6eced17af89fe00a1758d5da43e968a83b3db2dab not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.295191 4718 scope.go:117] "RemoveContainer" containerID="705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.295660 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1\": container with ID starting with 705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1 not found: ID does not exist" containerID="705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.295705 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1"} err="failed to get container status \"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1\": rpc error: code = NotFound desc = could not find container \"705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1\": container with ID starting with 705edfdf79362a05af397d481334409f9ceb90733aa126df078a96effb4fcfc1 not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.295747 4718 scope.go:117] "RemoveContainer" containerID="e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.309107 4718 scope.go:117] "RemoveContainer" containerID="ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.324534 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.325074 4718 scope.go:117] "RemoveContainer" containerID="99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.360572 4718 scope.go:117] "RemoveContainer" containerID="e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.361898 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b\": container with ID starting with e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b not found: ID does not exist" containerID="e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.361958 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b"} err="failed to get container status \"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b\": rpc error: code = NotFound desc = could not find container \"e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b\": container with ID starting with e2140176cc0669f4ccbdfc94d80453fac6eeee88e399c04ebfa79c74dddbe69b not found: ID does not exist"
Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.361997 4718 scope.go:117] "RemoveContainer" containerID="ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4"
Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.365812 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc
error: code = NotFound desc = could not find container \"ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4\": container with ID starting with ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4 not found: ID does not exist" containerID="ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.365854 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4"} err="failed to get container status \"ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4\": rpc error: code = NotFound desc = could not find container \"ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4\": container with ID starting with ea17b08532652562d38b480a211b0105a7dbce38ca386e1ab8afa0a4a852c4a4 not found: ID does not exist" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.365882 4718 scope.go:117] "RemoveContainer" containerID="99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713" Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.368560 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713\": container with ID starting with 99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713 not found: ID does not exist" containerID="99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.368595 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713"} err="failed to get container status \"99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713\": rpc error: code = NotFound desc = could not find container 
\"99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713\": container with ID starting with 99c104d6a1022e7180a2310bda9b658fc011dc9b6c2725826b818d1c4d2d8713 not found: ID does not exist" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.368623 4718 scope.go:117] "RemoveContainer" containerID="3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.370620 4718 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.370891 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521" gracePeriod=5 Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.383925 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.386983 4718 scope.go:117] "RemoveContainer" containerID="3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9" Jan 23 16:21:11 crc kubenswrapper[4718]: E0123 16:21:11.387671 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9\": container with ID starting with 3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9 not found: ID does not exist" containerID="3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.387717 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9"} err="failed 
to get container status \"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9\": rpc error: code = NotFound desc = could not find container \"3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9\": container with ID starting with 3b021362fd17677bc9fc503e2748bbb07c685e5121c187da2ec14d3d93e433b9 not found: ID does not exist" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.388449 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-smckr"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.394335 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.397469 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p6xg6"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.403543 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.408669 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xdfb"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.421557 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-967v4"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.426096 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-967v4"] Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.441339 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.443964 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.463920 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.474861 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.507446 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content\") pod \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.507565 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp66p\" (UniqueName: \"kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p\") pod \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.507690 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities\") pod \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\" (UID: \"d44f3dd6-7295-4fb3-b29b-78dac567ffff\") " Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.508559 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities" (OuterVolumeSpecName: "utilities") pod "d44f3dd6-7295-4fb3-b29b-78dac567ffff" (UID: "d44f3dd6-7295-4fb3-b29b-78dac567ffff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.512829 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p" (OuterVolumeSpecName: "kube-api-access-pp66p") pod "d44f3dd6-7295-4fb3-b29b-78dac567ffff" (UID: "d44f3dd6-7295-4fb3-b29b-78dac567ffff"). InnerVolumeSpecName "kube-api-access-pp66p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.579934 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.609109 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.609157 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp66p\" (UniqueName: \"kubernetes.io/projected/d44f3dd6-7295-4fb3-b29b-78dac567ffff-kube-api-access-pp66p\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.649264 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d44f3dd6-7295-4fb3-b29b-78dac567ffff" (UID: "d44f3dd6-7295-4fb3-b29b-78dac567ffff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.695131 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.710112 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44f3dd6-7295-4fb3-b29b-78dac567ffff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.712873 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.756120 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.796131 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.833775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.837124 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 16:21:11 crc kubenswrapper[4718]: I0123 16:21:11.924712 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.079962 4718 generic.go:334] "Generic (PLEG): container finished" podID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerID="8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68" exitCode=0 Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.080016 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerDied","Data":"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68"} Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.080038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6kd" event={"ID":"d44f3dd6-7295-4fb3-b29b-78dac567ffff","Type":"ContainerDied","Data":"463ffc4ced6371d6be9bb1a2227747f93ed96b0ebbd94f163cbfe86b5103bb68"} Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.080057 4718 scope.go:117] "RemoveContainer" containerID="8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.080129 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6kd" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.092160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.097554 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.105682 4718 scope.go:117] "RemoveContainer" containerID="eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.111106 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.112806 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"] Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.116831 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jq6kd"] Jan 23 16:21:12 crc 
kubenswrapper[4718]: I0123 16:21:12.130098 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.139648 4718 scope.go:117] "RemoveContainer" containerID="25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.159428 4718 scope.go:117] "RemoveContainer" containerID="8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68" Jan 23 16:21:12 crc kubenswrapper[4718]: E0123 16:21:12.160199 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68\": container with ID starting with 8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68 not found: ID does not exist" containerID="8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.160244 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68"} err="failed to get container status \"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68\": rpc error: code = NotFound desc = could not find container \"8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68\": container with ID starting with 8aa740b8ba2d3482cbf5a99a64da6597e85fdf9c25f7f3bafe4108cbf95b0e68 not found: ID does not exist" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.160276 4718 scope.go:117] "RemoveContainer" containerID="eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b" Jan 23 16:21:12 crc kubenswrapper[4718]: E0123 16:21:12.160760 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b\": container with ID starting with eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b not found: ID does not exist" containerID="eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.160785 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b"} err="failed to get container status \"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b\": rpc error: code = NotFound desc = could not find container \"eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b\": container with ID starting with eff3b59b54ca541fd49622572279ec1482e3af7f9b230c5a9c7b3cde48bac34b not found: ID does not exist" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.160802 4718 scope.go:117] "RemoveContainer" containerID="25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1" Jan 23 16:21:12 crc kubenswrapper[4718]: E0123 16:21:12.161219 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1\": container with ID starting with 25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1 not found: ID does not exist" containerID="25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.161245 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1"} err="failed to get container status \"25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1\": rpc error: code = NotFound desc = could not find container \"25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1\": container with ID 
starting with 25d65aad6c379df328e014f2bb7861fbcdac21fed6f5302be71c1a6287e025b1 not found: ID does not exist" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.187775 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.305802 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.409048 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.490751 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.788203 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.821425 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.832877 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.874926 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.960111 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 16:21:12 crc kubenswrapper[4718]: I0123 16:21:12.978188 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" 
Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.058200 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.152782 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219b51b6-4118-4212-94a2-48d6b2116112" path="/var/lib/kubelet/pods/219b51b6-4118-4212-94a2-48d6b2116112/volumes" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.154268 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" path="/var/lib/kubelet/pods/2d31ba73-9659-4b08-bd23-26a4f51835bf/volumes" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.155313 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" path="/var/lib/kubelet/pods/8be4ad13-7119-48b8-9f6e-3848463eba75/volumes" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.157532 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" path="/var/lib/kubelet/pods/d44f3dd6-7295-4fb3-b29b-78dac567ffff/volumes" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.158701 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dace0865-fb1e-42df-b857-85285c561bb8" path="/var/lib/kubelet/pods/dace0865-fb1e-42df-b857-85285c561bb8/volumes" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.162310 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.197967 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.202537 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 16:21:13 crc 
kubenswrapper[4718]: I0123 16:21:13.235965 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.256076 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.352783 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.397252 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.493458 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.571224 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.680594 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.681164 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.767342 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 16:21:13 crc kubenswrapper[4718]: I0123 16:21:13.819490 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.154835 4718 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.157494 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.223671 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.304451 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.381609 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.410091 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.483349 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.552671 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.655900 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.722572 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 16:21:14 crc kubenswrapper[4718]: I0123 16:21:14.918898 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.082186 4718 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.140845 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.282867 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.350598 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457493 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bzfg"]
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457712 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457725 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457736 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457742 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457749 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457755 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457763 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457769 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457779 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457785 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457796 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457802 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457812 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457819 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457825 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" containerName="installer"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457831 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" containerName="installer"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457839 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457845 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457853 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457858 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457867 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457875 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457882 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457888 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457897 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457902 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="extract-utilities"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457908 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457914 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="extract-content"
Jan 23 16:21:15 crc kubenswrapper[4718]: E0123 16:21:15.457922 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.457928 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458370 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d31ba73-9659-4b08-bd23-26a4f51835bf" containerName="marketplace-operator"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458390 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be4ad13-7119-48b8-9f6e-3848463eba75" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458399 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458409 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b51b6-4118-4212-94a2-48d6b2116112" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458422 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="86dfa729-72c2-42d8-a66f-00e7b0ed98d6" containerName="installer"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458431 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44f3dd6-7295-4fb3-b29b-78dac567ffff" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458440 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dace0865-fb1e-42df-b857-85285c561bb8" containerName="registry-server"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.458835 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.460701 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.462325 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.462675 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.463400 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.470063 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bzfg"]
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.472715 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.567362 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7sz9\" (UniqueName: \"kubernetes.io/projected/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-kube-api-access-g7sz9\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.567862 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.567998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.669001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7sz9\" (UniqueName: \"kubernetes.io/projected/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-kube-api-access-g7sz9\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.669089 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.669123 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.672592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.678556 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.696147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7sz9\" (UniqueName: \"kubernetes.io/projected/ad5b2aea-ec41-49cb-ac4b-0497fed12dab-kube-api-access-g7sz9\") pod \"marketplace-operator-79b997595-7bzfg\" (UID: \"ad5b2aea-ec41-49cb-ac4b-0497fed12dab\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:15 crc kubenswrapper[4718]: I0123 16:21:15.780907 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.022193 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.042546 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.239435 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.427409 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bzfg"]
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.531781 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.531866 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.590745 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.684177 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.684323 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.684798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685029 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685063 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685102 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685139 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685153 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685229 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685824 4718 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685855 4718 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685867 4718 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.685881 4718 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.701200 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.787204 4718 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 16:21:16 crc kubenswrapper[4718]: I0123 16:21:16.806572 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.077896 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.150610 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.150944 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.153646 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.153708 4718 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521" exitCode=137
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.153813 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.160953 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.161003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.161014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" event={"ID":"ad5b2aea-ec41-49cb-ac4b-0497fed12dab","Type":"ContainerStarted","Data":"40526d359d65dfbe4c1eb01659505f0d50b2dafdc40139cae4984cbade47e88e"}
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.161031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" event={"ID":"ad5b2aea-ec41-49cb-ac4b-0497fed12dab","Type":"ContainerStarted","Data":"b53040362c7e2702d1423097620141f527653bb916dd08b4122e8cf0eec98925"}
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.161059 4718 scope.go:117] "RemoveContainer" containerID="89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.162370 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.162416 4718 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="75c18826-e706-4ba8-b430-9e368ac66521"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.165727 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.165766 4718 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="75c18826-e706-4ba8-b430-9e368ac66521"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.170362 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" podStartSLOduration=7.170348838 podStartE2EDuration="7.170348838s" podCreationTimestamp="2026-01-23 16:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:21:17.169057003 +0000 UTC m=+278.316299004" watchObservedRunningTime="2026-01-23 16:21:17.170348838 +0000 UTC m=+278.317590829"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.190538 4718 scope.go:117] "RemoveContainer" containerID="89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521"
Jan 23 16:21:17 crc kubenswrapper[4718]: E0123 16:21:17.193052 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521\": container with ID starting with 89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521 not found: ID does not exist" containerID="89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.193128 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521"} err="failed to get container status \"89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521\": rpc error: code = NotFound desc = could not find container \"89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521\": container with ID starting with 89f7c2f71b6e577e39f07f9c1ba45db204e11329f91df3e8228ed6239eb91521 not found: ID does not exist"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.713729 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 16:21:17 crc kubenswrapper[4718]: I0123 16:21:17.951102 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 16:21:18 crc kubenswrapper[4718]: I0123 16:21:18.504131 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.164783 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"]
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.165933 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" podUID="b2c947b7-81d9-4041-9b78-668f44427eb9" containerName="controller-manager" containerID="cri-o://0aaf098ba2d1be22e1b7c2aeb8663fc00fe645ebe0d5c2f5c30d1a5a7167b976" gracePeriod=30
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.269529 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"]
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.302088 4718 generic.go:334] "Generic (PLEG): container finished" podID="b2c947b7-81d9-4041-9b78-668f44427eb9" containerID="0aaf098ba2d1be22e1b7c2aeb8663fc00fe645ebe0d5c2f5c30d1a5a7167b976" exitCode=0
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.302205 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" event={"ID":"b2c947b7-81d9-4041-9b78-668f44427eb9","Type":"ContainerDied","Data":"0aaf098ba2d1be22e1b7c2aeb8663fc00fe645ebe0d5c2f5c30d1a5a7167b976"}
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.302351 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" podUID="be7521bc-a519-439d-9ae3-4fb10368e494" containerName="route-controller-manager" containerID="cri-o://986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677" gracePeriod=30
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.406120 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5cz87"]
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.407035 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.410544 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.428846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cz87"]
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.469394 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.469467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbww\" (UniqueName: \"kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.469792 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.548418 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.575567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.575661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbww\" (UniqueName: \"kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.575720 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.578909 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.578938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.601523 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbww\" (UniqueName: \"kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww\") pod \"community-operators-5cz87\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.603226 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2rjh"]
Jan 23 16:21:36 crc kubenswrapper[4718]: E0123 16:21:36.603478 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c947b7-81d9-4041-9b78-668f44427eb9" containerName="controller-manager"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.603493 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c947b7-81d9-4041-9b78-668f44427eb9" containerName="controller-manager"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.603576 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c947b7-81d9-4041-9b78-668f44427eb9" containerName="controller-manager"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.604314 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.611949 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.630462 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2rjh"]
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.670395 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.676945 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config\") pod \"b2c947b7-81d9-4041-9b78-668f44427eb9\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677025 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert\") pod \"b2c947b7-81d9-4041-9b78-668f44427eb9\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677093 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles\") pod \"b2c947b7-81d9-4041-9b78-668f44427eb9\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2cl4\" (UniqueName: \"kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4\") pod \"b2c947b7-81d9-4041-9b78-668f44427eb9\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677283 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca\") pod \"b2c947b7-81d9-4041-9b78-668f44427eb9\" (UID: \"b2c947b7-81d9-4041-9b78-668f44427eb9\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-utilities\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kmh\" (UniqueName: \"kubernetes.io/projected/2bec1314-07f0-4c53-bad8-5eeb60b833a3-kube-api-access-f2kmh\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.677605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-catalog-content\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.678251 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2c947b7-81d9-4041-9b78-668f44427eb9" (UID: "b2c947b7-81d9-4041-9b78-668f44427eb9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.678608 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2c947b7-81d9-4041-9b78-668f44427eb9" (UID: "b2c947b7-81d9-4041-9b78-668f44427eb9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.679663 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config" (OuterVolumeSpecName: "config") pod "b2c947b7-81d9-4041-9b78-668f44427eb9" (UID: "b2c947b7-81d9-4041-9b78-668f44427eb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.680618 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4" (OuterVolumeSpecName: "kube-api-access-g2cl4") pod "b2c947b7-81d9-4041-9b78-668f44427eb9" (UID: "b2c947b7-81d9-4041-9b78-668f44427eb9"). InnerVolumeSpecName "kube-api-access-g2cl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.680773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2c947b7-81d9-4041-9b78-668f44427eb9" (UID: "b2c947b7-81d9-4041-9b78-668f44427eb9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.754596 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778551 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca\") pod \"be7521bc-a519-439d-9ae3-4fb10368e494\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778653 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44t8v\" (UniqueName: \"kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v\") pod \"be7521bc-a519-439d-9ae3-4fb10368e494\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778718 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config\") pod \"be7521bc-a519-439d-9ae3-4fb10368e494\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert\") pod \"be7521bc-a519-439d-9ae3-4fb10368e494\" (UID: \"be7521bc-a519-439d-9ae3-4fb10368e494\") "
Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778903 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-catalog-content\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") "
pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.778957 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-utilities\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779011 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kmh\" (UniqueName: \"kubernetes.io/projected/2bec1314-07f0-4c53-bad8-5eeb60b833a3-kube-api-access-f2kmh\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779064 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779074 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779085 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c947b7-81d9-4041-9b78-668f44427eb9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779093 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c947b7-81d9-4041-9b78-668f44427eb9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779102 4718 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-g2cl4\" (UniqueName: \"kubernetes.io/projected/b2c947b7-81d9-4041-9b78-668f44427eb9-kube-api-access-g2cl4\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779406 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config" (OuterVolumeSpecName: "config") pod "be7521bc-a519-439d-9ae3-4fb10368e494" (UID: "be7521bc-a519-439d-9ae3-4fb10368e494"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.779611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca" (OuterVolumeSpecName: "client-ca") pod "be7521bc-a519-439d-9ae3-4fb10368e494" (UID: "be7521bc-a519-439d-9ae3-4fb10368e494"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.780219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-utilities\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.780223 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bec1314-07f0-4c53-bad8-5eeb60b833a3-catalog-content\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.785017 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be7521bc-a519-439d-9ae3-4fb10368e494" (UID: "be7521bc-a519-439d-9ae3-4fb10368e494"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.785468 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v" (OuterVolumeSpecName: "kube-api-access-44t8v") pod "be7521bc-a519-439d-9ae3-4fb10368e494" (UID: "be7521bc-a519-439d-9ae3-4fb10368e494"). InnerVolumeSpecName "kube-api-access-44t8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.797261 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kmh\" (UniqueName: \"kubernetes.io/projected/2bec1314-07f0-4c53-bad8-5eeb60b833a3-kube-api-access-f2kmh\") pod \"certified-operators-d2rjh\" (UID: \"2bec1314-07f0-4c53-bad8-5eeb60b833a3\") " pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.884436 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be7521bc-a519-439d-9ae3-4fb10368e494-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.884490 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.884506 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44t8v\" (UniqueName: \"kubernetes.io/projected/be7521bc-a519-439d-9ae3-4fb10368e494-kube-api-access-44t8v\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: 
I0123 16:21:36.884523 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7521bc-a519-439d-9ae3-4fb10368e494-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.937741 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2rjh" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.998646 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p"] Jan 23 16:21:36 crc kubenswrapper[4718]: E0123 16:21:36.998858 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7521bc-a519-439d-9ae3-4fb10368e494" containerName="route-controller-manager" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.998871 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7521bc-a519-439d-9ae3-4fb10368e494" containerName="route-controller-manager" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.998961 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7521bc-a519-439d-9ae3-4fb10368e494" containerName="route-controller-manager" Jan 23 16:21:36 crc kubenswrapper[4718]: I0123 16:21:36.999301 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.016079 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.060536 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.061355 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.082396 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.089305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rw8\" (UniqueName: \"kubernetes.io/projected/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-kube-api-access-n8rw8\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.089371 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-config\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.089445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-client-ca\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.089467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-serving-cert\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " 
pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.089500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-proxy-ca-bundles\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.152007 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz"] Jan 23 16:21:37 crc kubenswrapper[4718]: E0123 16:21:37.152464 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-82lr6 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" podUID="6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.191788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-proxy-ca-bundles\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.191873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc 
kubenswrapper[4718]: I0123 16:21:37.191927 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8rw8\" (UniqueName: \"kubernetes.io/projected/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-kube-api-access-n8rw8\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.191967 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-config\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.192009 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.192052 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82lr6\" (UniqueName: \"kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.192074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.192135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-client-ca\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.192165 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-serving-cert\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.193747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-config\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.193856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-client-ca\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.193986 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-proxy-ca-bundles\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.198650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-serving-cert\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.211051 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8rw8\" (UniqueName: \"kubernetes.io/projected/0aefccb3-bec0-40ca-bd49-5d18b5df30fa-kube-api-access-n8rw8\") pod \"controller-manager-dcd9cb8d6-gtl5p\" (UID: \"0aefccb3-bec0-40ca-bd49-5d18b5df30fa\") " pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.249701 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cz87"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.293934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.294009 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82lr6\" (UniqueName: 
\"kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.294039 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.294087 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.295309 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.295574 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc 
kubenswrapper[4718]: I0123 16:21:37.308656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.310573 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82lr6\" (UniqueName: \"kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6\") pod \"route-controller-manager-57667fdb67-5wktz\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.312145 4718 generic.go:334] "Generic (PLEG): container finished" podID="be7521bc-a519-439d-9ae3-4fb10368e494" containerID="986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677" exitCode=0 Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.312292 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" event={"ID":"be7521bc-a519-439d-9ae3-4fb10368e494","Type":"ContainerDied","Data":"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677"} Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.312402 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" event={"ID":"be7521bc-a519-439d-9ae3-4fb10368e494","Type":"ContainerDied","Data":"cfe05ced058dad575d0269f7deb34142da9ef13dca740dbc1bdb3f1a6c5f4854"} Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.312487 4718 scope.go:117] "RemoveContainer" containerID="986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677" Jan 23 
16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.312701 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.315988 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" event={"ID":"b2c947b7-81d9-4041-9b78-668f44427eb9","Type":"ContainerDied","Data":"13338b77998472746122d527c1d16e5d56bf82795285e99322ee47559a6e6a03"} Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.316051 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bvjhk" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.324115 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.324606 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerStarted","Data":"6645d63dc3a8052e6aef2f2a6d343be7dac994e19793ebc0e0fca48ed60c701d"} Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.326908 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.385011 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.385316 4718 scope.go:117] "RemoveContainer" containerID="986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677" Jan 23 16:21:37 crc kubenswrapper[4718]: E0123 16:21:37.385860 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677\": container with ID starting with 986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677 not found: ID does not exist" containerID="986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.385968 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677"} err="failed to get container status \"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677\": rpc error: code = NotFound desc = could not find container \"986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677\": container with ID starting with 986b796061db417c4c77de7c3457d2481dd84c0101ff9316c2f9e6f9ed8e2677 not found: ID does not exist" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.386093 4718 scope.go:117] "RemoveContainer" containerID="0aaf098ba2d1be22e1b7c2aeb8663fc00fe645ebe0d5c2f5c30d1a5a7167b976" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.395947 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42bjf"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.431345 4718 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2rjh"] Jan 23 16:21:37 crc kubenswrapper[4718]: W0123 16:21:37.437230 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bec1314_07f0_4c53_bad8_5eeb60b833a3.slice/crio-b7fb9469a4eddaebe5db5bc72cc05d029411a2f6191a36ae6e988cb8d0048d8b WatchSource:0}: Error finding container b7fb9469a4eddaebe5db5bc72cc05d029411a2f6191a36ae6e988cb8d0048d8b: Status 404 returned error can't find the container with id b7fb9469a4eddaebe5db5bc72cc05d029411a2f6191a36ae6e988cb8d0048d8b Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.440582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.457293 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.462716 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bvjhk"] Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.496460 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca\") pod \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.496566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config\") pod \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.496621 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert\") pod \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.496800 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82lr6\" (UniqueName: \"kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6\") pod \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\" (UID: \"6937bd73-3dcf-46cc-a634-ebbd6a23a4e9\") " Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.499315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" (UID: "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.499408 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config" (OuterVolumeSpecName: "config") pod "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" (UID: "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.503861 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" (UID: "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.504385 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.504423 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.504437 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.508514 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6" (OuterVolumeSpecName: "kube-api-access-82lr6") pod "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" (UID: "6937bd73-3dcf-46cc-a634-ebbd6a23a4e9"). InnerVolumeSpecName "kube-api-access-82lr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.547489 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p"] Jan 23 16:21:37 crc kubenswrapper[4718]: W0123 16:21:37.557550 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aefccb3_bec0_40ca_bd49_5d18b5df30fa.slice/crio-463ee7966ba5023be63287de61372a46efd897b59d463091931ee1b127b6d609 WatchSource:0}: Error finding container 463ee7966ba5023be63287de61372a46efd897b59d463091931ee1b127b6d609: Status 404 returned error can't find the container with id 463ee7966ba5023be63287de61372a46efd897b59d463091931ee1b127b6d609 Jan 23 16:21:37 crc kubenswrapper[4718]: I0123 16:21:37.606125 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82lr6\" (UniqueName: \"kubernetes.io/projected/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9-kube-api-access-82lr6\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.335576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" event={"ID":"0aefccb3-bec0-40ca-bd49-5d18b5df30fa","Type":"ContainerStarted","Data":"d320cf7b7d50cd823676899ce0b1f9f9b436a4e252280c5a11d04c3f68cc3796"} Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.335664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" event={"ID":"0aefccb3-bec0-40ca-bd49-5d18b5df30fa","Type":"ContainerStarted","Data":"463ee7966ba5023be63287de61372a46efd897b59d463091931ee1b127b6d609"} Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.336040 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 
16:21:38.337282 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b24d924-160f-42a8-a416-54d63a814db4" containerID="b8cc7436f0f6df1b416a761d350d4d69144230ab40e782642ae6173633dd86ac" exitCode=0 Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.337316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerDied","Data":"b8cc7436f0f6df1b416a761d350d4d69144230ab40e782642ae6173633dd86ac"} Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.341568 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.342133 4718 generic.go:334] "Generic (PLEG): container finished" podID="2bec1314-07f0-4c53-bad8-5eeb60b833a3" containerID="4a62fecbe3b284695974dc76cc96cba6660de4c12a785a7f02046207bc7bed09" exitCode=0 Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.342205 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.342284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2rjh" event={"ID":"2bec1314-07f0-4c53-bad8-5eeb60b833a3","Type":"ContainerDied","Data":"4a62fecbe3b284695974dc76cc96cba6660de4c12a785a7f02046207bc7bed09"} Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.342361 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2rjh" event={"ID":"2bec1314-07f0-4c53-bad8-5eeb60b833a3","Type":"ContainerStarted","Data":"b7fb9469a4eddaebe5db5bc72cc05d029411a2f6191a36ae6e988cb8d0048d8b"} Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.365853 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" podStartSLOduration=2.365834766 podStartE2EDuration="2.365834766s" podCreationTimestamp="2026-01-23 16:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:21:38.36219817 +0000 UTC m=+299.509440171" watchObservedRunningTime="2026-01-23 16:21:38.365834766 +0000 UTC m=+299.513076767" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.443314 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.444425 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.459377 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.460147 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.460962 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.463453 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.464284 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.475536 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz"] Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.493222 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-5wktz"] Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.498396 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.499613 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.528702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.528771 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.528827 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74n6v\" (UniqueName: \"kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.528889 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.630603 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74n6v\" (UniqueName: \"kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: 
\"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.630722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.630803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.630846 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.634314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.634616 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.645922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.654297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74n6v\" (UniqueName: \"kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v\") pod \"route-controller-manager-f8c564845-zb6jm\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:38 crc kubenswrapper[4718]: I0123 16:21:38.792870 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.005577 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfk5b"] Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.007907 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.020920 4718 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.021398 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.040877 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfk5b"] Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.054310 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.139392 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-utilities\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.139441 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-catalog-content\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.139541 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pbq\" (UniqueName: \"kubernetes.io/projected/c8488e45-93b0-4584-af48-deee2924279f-kube-api-access-74pbq\") pod \"redhat-marketplace-rfk5b\" (UID: 
\"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.156600 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6937bd73-3dcf-46cc-a634-ebbd6a23a4e9" path="/var/lib/kubelet/pods/6937bd73-3dcf-46cc-a634-ebbd6a23a4e9/volumes" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.157066 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c947b7-81d9-4041-9b78-668f44427eb9" path="/var/lib/kubelet/pods/b2c947b7-81d9-4041-9b78-668f44427eb9/volumes" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.157943 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7521bc-a519-439d-9ae3-4fb10368e494" path="/var/lib/kubelet/pods/be7521bc-a519-439d-9ae3-4fb10368e494/volumes" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.205078 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5vb52"] Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.206891 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.212242 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.221591 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vb52"] Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.240754 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74pbq\" (UniqueName: \"kubernetes.io/projected/c8488e45-93b0-4584-af48-deee2924279f-kube-api-access-74pbq\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.240794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-utilities\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.240813 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-catalog-content\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.241869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-catalog-content\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " 
pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.241928 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8488e45-93b0-4584-af48-deee2924279f-utilities\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.263965 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pbq\" (UniqueName: \"kubernetes.io/projected/c8488e45-93b0-4584-af48-deee2924279f-kube-api-access-74pbq\") pod \"redhat-marketplace-rfk5b\" (UID: \"c8488e45-93b0-4584-af48-deee2924279f\") " pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.341522 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-catalog-content\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.342000 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-utilities\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.342032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psv8j\" (UniqueName: \"kubernetes.io/projected/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-kube-api-access-psv8j\") pod \"redhat-operators-5vb52\" (UID: 
\"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.349939 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.351781 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" event={"ID":"362beb53-35c5-471b-9af1-ea5f9492b776","Type":"ContainerStarted","Data":"8d843c2a2a25c7229dabaa941ae950037ed9e12dad87310617e14f42e3a0c166"} Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.351831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" event={"ID":"362beb53-35c5-471b-9af1-ea5f9492b776","Type":"ContainerStarted","Data":"929d61d29ce8e1e498c99bac753b0cc4f5920b263f6a4f666829d882fc552a78"} Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.352068 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.354545 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfk5b" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.355539 4718 patch_prober.go:28] interesting pod/route-controller-manager-f8c564845-zb6jm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.355587 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.357179 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2rjh" event={"ID":"2bec1314-07f0-4c53-bad8-5eeb60b833a3","Type":"ContainerStarted","Data":"7c851e55f896c0154a11dd0099b8ade26cb3e10da229d67ac66d5f0da2f6670c"} Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.361078 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerStarted","Data":"9b50948d2400b9364d2d5853578db4b52537d85e47045765da022249d15f8d49"} Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.374574 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" podStartSLOduration=2.374552756 podStartE2EDuration="2.374552756s" podCreationTimestamp="2026-01-23 16:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
16:21:39.370673694 +0000 UTC m=+300.517915695" watchObservedRunningTime="2026-01-23 16:21:39.374552756 +0000 UTC m=+300.521794747" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.446299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-utilities\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.446499 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psv8j\" (UniqueName: \"kubernetes.io/projected/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-kube-api-access-psv8j\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.446686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-catalog-content\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.448276 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-utilities\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.448826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-catalog-content\") pod \"redhat-operators-5vb52\" (UID: 
\"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.480526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psv8j\" (UniqueName: \"kubernetes.io/projected/3a97ed77-1e1b-447a-9a88-6d43f803f9d9-kube-api-access-psv8j\") pod \"redhat-operators-5vb52\" (UID: \"3a97ed77-1e1b-447a-9a88-6d43f803f9d9\") " pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.524114 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vb52" Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.621762 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfk5b"] Jan 23 16:21:39 crc kubenswrapper[4718]: W0123 16:21:39.665329 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8488e45_93b0_4584_af48_deee2924279f.slice/crio-74e5cbadb582acec5c9bac08d2c959e671b2a4cf8d9db08ad84fcb706f826fff WatchSource:0}: Error finding container 74e5cbadb582acec5c9bac08d2c959e671b2a4cf8d9db08ad84fcb706f826fff: Status 404 returned error can't find the container with id 74e5cbadb582acec5c9bac08d2c959e671b2a4cf8d9db08ad84fcb706f826fff Jan 23 16:21:39 crc kubenswrapper[4718]: I0123 16:21:39.782508 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vb52"] Jan 23 16:21:39 crc kubenswrapper[4718]: W0123 16:21:39.789512 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a97ed77_1e1b_447a_9a88_6d43f803f9d9.slice/crio-8bce115e52575360a3852cad950d046f5947a287d1304dd349c6776186640631 WatchSource:0}: Error finding container 8bce115e52575360a3852cad950d046f5947a287d1304dd349c6776186640631: Status 404 returned 
error can't find the container with id 8bce115e52575360a3852cad950d046f5947a287d1304dd349c6776186640631 Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.368086 4718 generic.go:334] "Generic (PLEG): container finished" podID="c8488e45-93b0-4584-af48-deee2924279f" containerID="24aaebbdc1a63a55c26e2fd97b2b8afb194791da6e549f6047f19d672c52363b" exitCode=0 Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.368161 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfk5b" event={"ID":"c8488e45-93b0-4584-af48-deee2924279f","Type":"ContainerDied","Data":"24aaebbdc1a63a55c26e2fd97b2b8afb194791da6e549f6047f19d672c52363b"} Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.368612 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfk5b" event={"ID":"c8488e45-93b0-4584-af48-deee2924279f","Type":"ContainerStarted","Data":"74e5cbadb582acec5c9bac08d2c959e671b2a4cf8d9db08ad84fcb706f826fff"} Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.372513 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b24d924-160f-42a8-a416-54d63a814db4" containerID="9b50948d2400b9364d2d5853578db4b52537d85e47045765da022249d15f8d49" exitCode=0 Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.372576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerDied","Data":"9b50948d2400b9364d2d5853578db4b52537d85e47045765da022249d15f8d49"} Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.377939 4718 generic.go:334] "Generic (PLEG): container finished" podID="3a97ed77-1e1b-447a-9a88-6d43f803f9d9" containerID="12ec5373b7135d3d942ce8f9742d1b24566b9f95f756303712872eb2dc487685" exitCode=0 Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.378032 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5vb52" event={"ID":"3a97ed77-1e1b-447a-9a88-6d43f803f9d9","Type":"ContainerDied","Data":"12ec5373b7135d3d942ce8f9742d1b24566b9f95f756303712872eb2dc487685"}
Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.378057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vb52" event={"ID":"3a97ed77-1e1b-447a-9a88-6d43f803f9d9","Type":"ContainerStarted","Data":"8bce115e52575360a3852cad950d046f5947a287d1304dd349c6776186640631"}
Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.384645 4718 generic.go:334] "Generic (PLEG): container finished" podID="2bec1314-07f0-4c53-bad8-5eeb60b833a3" containerID="7c851e55f896c0154a11dd0099b8ade26cb3e10da229d67ac66d5f0da2f6670c" exitCode=0
Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.384749 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2rjh" event={"ID":"2bec1314-07f0-4c53-bad8-5eeb60b833a3","Type":"ContainerDied","Data":"7c851e55f896c0154a11dd0099b8ade26cb3e10da229d67ac66d5f0da2f6670c"}
Jan 23 16:21:40 crc kubenswrapper[4718]: I0123 16:21:40.392385 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.394407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerStarted","Data":"f9bcdb29d9e43e960cc0b70bbf45fbc988e8ff533552283892599a90177b372d"}
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.397553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vb52" event={"ID":"3a97ed77-1e1b-447a-9a88-6d43f803f9d9","Type":"ContainerStarted","Data":"9e16cfcff76ae07688ac587867673c02354ca11b908e894435b0785d963d733f"}
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.400991 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2rjh" event={"ID":"2bec1314-07f0-4c53-bad8-5eeb60b833a3","Type":"ContainerStarted","Data":"d041e83e5be5f0589ac689d54209cb400c23bb41dea8e403d19e37eb57e2ed50"}
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.403075 4718 generic.go:334] "Generic (PLEG): container finished" podID="c8488e45-93b0-4584-af48-deee2924279f" containerID="dacac77cadbca8342b1f3d1f63c617233ef31c32a383e372c65c584b7ced2405" exitCode=0
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.403746 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfk5b" event={"ID":"c8488e45-93b0-4584-af48-deee2924279f","Type":"ContainerDied","Data":"dacac77cadbca8342b1f3d1f63c617233ef31c32a383e372c65c584b7ced2405"}
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.413647 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5cz87" podStartSLOduration=2.96774484 podStartE2EDuration="5.413614758s" podCreationTimestamp="2026-01-23 16:21:36 +0000 UTC" firstStartedPulling="2026-01-23 16:21:38.340475496 +0000 UTC m=+299.487717497" lastFinishedPulling="2026-01-23 16:21:40.786345424 +0000 UTC m=+301.933587415" observedRunningTime="2026-01-23 16:21:41.410426084 +0000 UTC m=+302.557668085" watchObservedRunningTime="2026-01-23 16:21:41.413614758 +0000 UTC m=+302.560856749"
Jan 23 16:21:41 crc kubenswrapper[4718]: I0123 16:21:41.467079 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2rjh" podStartSLOduration=3.013720315 podStartE2EDuration="5.467050651s" podCreationTimestamp="2026-01-23 16:21:36 +0000 UTC" firstStartedPulling="2026-01-23 16:21:38.344680476 +0000 UTC m=+299.491922467" lastFinishedPulling="2026-01-23 16:21:40.798010782 +0000 UTC m=+301.945252803" observedRunningTime="2026-01-23 16:21:41.466429385 +0000 UTC m=+302.613671366" watchObservedRunningTime="2026-01-23 16:21:41.467050651 +0000 UTC m=+302.614292662"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.072524 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"]
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.073954 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.076958 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.077017 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.077366 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.077801 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.078225 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.083703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"]
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.193987 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.195517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.195617 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpc9\" (UniqueName: \"kubernetes.io/projected/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-kube-api-access-2fpc9\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.298035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.298187 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.299394 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpc9\" (UniqueName: \"kubernetes.io/projected/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-kube-api-access-2fpc9\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.300224 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.305774 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.333619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpc9\" (UniqueName: \"kubernetes.io/projected/8b903dfe-90d9-43e1-acec-91ee4ca0cfd5-kube-api-access-2fpc9\") pod \"cluster-monitoring-operator-6d5b84845-7nblg\" (UID: \"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.393145 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.426547 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfk5b" event={"ID":"c8488e45-93b0-4584-af48-deee2924279f","Type":"ContainerStarted","Data":"c9b1f339c278397384159aecee6e4bdd3155325a7453d0a0389e71b0795282dd"}
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.430320 4718 generic.go:334] "Generic (PLEG): container finished" podID="3a97ed77-1e1b-447a-9a88-6d43f803f9d9" containerID="9e16cfcff76ae07688ac587867673c02354ca11b908e894435b0785d963d733f" exitCode=0
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.430413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vb52" event={"ID":"3a97ed77-1e1b-447a-9a88-6d43f803f9d9","Type":"ContainerDied","Data":"9e16cfcff76ae07688ac587867673c02354ca11b908e894435b0785d963d733f"}
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.467795 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfk5b" podStartSLOduration=2.980183098 podStartE2EDuration="4.467766009s" podCreationTimestamp="2026-01-23 16:21:38 +0000 UTC" firstStartedPulling="2026-01-23 16:21:40.371117045 +0000 UTC m=+301.518359036" lastFinishedPulling="2026-01-23 16:21:41.858699956 +0000 UTC m=+303.005941947" observedRunningTime="2026-01-23 16:21:42.452809784 +0000 UTC m=+303.600051795" watchObservedRunningTime="2026-01-23 16:21:42.467766009 +0000 UTC m=+303.615008000"
Jan 23 16:21:42 crc kubenswrapper[4718]: I0123 16:21:42.875777 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg"]
Jan 23 16:21:42 crc kubenswrapper[4718]: W0123 16:21:42.885235 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b903dfe_90d9_43e1_acec_91ee4ca0cfd5.slice/crio-deea609695844aa4add7b88cedff71d454329e411d45ee18a8fbdc03b3b87b0e WatchSource:0}: Error finding container deea609695844aa4add7b88cedff71d454329e411d45ee18a8fbdc03b3b87b0e: Status 404 returned error can't find the container with id deea609695844aa4add7b88cedff71d454329e411d45ee18a8fbdc03b3b87b0e
Jan 23 16:21:43 crc kubenswrapper[4718]: I0123 16:21:43.438367 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg" event={"ID":"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5","Type":"ContainerStarted","Data":"deea609695844aa4add7b88cedff71d454329e411d45ee18a8fbdc03b3b87b0e"}
Jan 23 16:21:43 crc kubenswrapper[4718]: I0123 16:21:43.440477 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vb52" event={"ID":"3a97ed77-1e1b-447a-9a88-6d43f803f9d9","Type":"ContainerStarted","Data":"d2ffafa8c7e57493b190265e5b43ec89a7d516245b303a2aa61ea228d124871c"}
Jan 23 16:21:43 crc kubenswrapper[4718]: I0123 16:21:43.470440 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5vb52" podStartSLOduration=1.894562714 podStartE2EDuration="4.470421039s" podCreationTimestamp="2026-01-23 16:21:39 +0000 UTC" firstStartedPulling="2026-01-23 16:21:40.379981569 +0000 UTC m=+301.527223590" lastFinishedPulling="2026-01-23 16:21:42.955839914 +0000 UTC m=+304.103081915" observedRunningTime="2026-01-23 16:21:43.466779783 +0000 UTC m=+304.614021784" watchObservedRunningTime="2026-01-23 16:21:43.470421039 +0000 UTC m=+304.617663020"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.474913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg" event={"ID":"8b903dfe-90d9-43e1-acec-91ee4ca0cfd5","Type":"ContainerStarted","Data":"85ee6827d3f89a0f33932dd5b13ebc32e685aa80b516816ee0044107151af12a"}
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.497401 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7nblg" podStartSLOduration=1.230433494 podStartE2EDuration="4.497375961s" podCreationTimestamp="2026-01-23 16:21:42 +0000 UTC" firstStartedPulling="2026-01-23 16:21:42.890078825 +0000 UTC m=+304.037320816" lastFinishedPulling="2026-01-23 16:21:46.157021282 +0000 UTC m=+307.304263283" observedRunningTime="2026-01-23 16:21:46.492487431 +0000 UTC m=+307.639729422" watchObservedRunningTime="2026-01-23 16:21:46.497375961 +0000 UTC m=+307.644617962"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.754873 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.760901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.795960 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"]
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.796607 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.800893 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.804123 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-zr6wl"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.829348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.855212 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"]
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.876592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ptn49\" (UID: \"6723661c-ad90-4bdf-99b3-485cb9d2f5f1\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.939232 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.940773 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.977982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ptn49\" (UID: \"6723661c-ad90-4bdf-99b3-485cb9d2f5f1\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:46 crc kubenswrapper[4718]: E0123 16:21:46.978275 4718 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Jan 23 16:21:46 crc kubenswrapper[4718]: E0123 16:21:46.978434 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates podName:6723661c-ad90-4bdf-99b3-485cb9d2f5f1 nodeName:}" failed. No retries permitted until 2026-01-23 16:21:47.478400678 +0000 UTC m=+308.625642679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-ptn49" (UID: "6723661c-ad90-4bdf-99b3-485cb9d2f5f1") : secret "prometheus-operator-admission-webhook-tls" not found
Jan 23 16:21:46 crc kubenswrapper[4718]: I0123 16:21:46.989280 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:47 crc kubenswrapper[4718]: I0123 16:21:47.485542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ptn49\" (UID: \"6723661c-ad90-4bdf-99b3-485cb9d2f5f1\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:47 crc kubenswrapper[4718]: E0123 16:21:47.485726 4718 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Jan 23 16:21:47 crc kubenswrapper[4718]: E0123 16:21:47.485787 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates podName:6723661c-ad90-4bdf-99b3-485cb9d2f5f1 nodeName:}" failed. No retries permitted until 2026-01-23 16:21:48.485769264 +0000 UTC m=+309.633011255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-ptn49" (UID: "6723661c-ad90-4bdf-99b3-485cb9d2f5f1") : secret "prometheus-operator-admission-webhook-tls" not found
Jan 23 16:21:47 crc kubenswrapper[4718]: I0123 16:21:47.535483 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5cz87"
Jan 23 16:21:47 crc kubenswrapper[4718]: I0123 16:21:47.580197 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2rjh"
Jan 23 16:21:48 crc kubenswrapper[4718]: I0123 16:21:48.500602 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ptn49\" (UID: \"6723661c-ad90-4bdf-99b3-485cb9d2f5f1\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:48 crc kubenswrapper[4718]: I0123 16:21:48.508954 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6723661c-ad90-4bdf-99b3-485cb9d2f5f1-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ptn49\" (UID: \"6723661c-ad90-4bdf-99b3-485cb9d2f5f1\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:48 crc kubenswrapper[4718]: I0123 16:21:48.610525 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.042360 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"]
Jan 23 16:21:49 crc kubenswrapper[4718]: W0123 16:21:49.054126 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6723661c_ad90_4bdf_99b3_485cb9d2f5f1.slice/crio-86458592093cfcfb4ef3370410177c8c5fbf8da14a8edc306865e5174d9af0b7 WatchSource:0}: Error finding container 86458592093cfcfb4ef3370410177c8c5fbf8da14a8edc306865e5174d9af0b7: Status 404 returned error can't find the container with id 86458592093cfcfb4ef3370410177c8c5fbf8da14a8edc306865e5174d9af0b7
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.355509 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfk5b"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.355817 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfk5b"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.399603 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfk5b"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.494982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49" event={"ID":"6723661c-ad90-4bdf-99b3-485cb9d2f5f1","Type":"ContainerStarted","Data":"86458592093cfcfb4ef3370410177c8c5fbf8da14a8edc306865e5174d9af0b7"}
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.525512 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5vb52"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.525607 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5vb52"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.543204 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfk5b"
Jan 23 16:21:49 crc kubenswrapper[4718]: I0123 16:21:49.574490 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5vb52"
Jan 23 16:21:50 crc kubenswrapper[4718]: I0123 16:21:50.571200 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5vb52"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.517188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49" event={"ID":"6723661c-ad90-4bdf-99b3-485cb9d2f5f1","Type":"ContainerStarted","Data":"dbf9d8a4bf923205fcc02993d8d348ed679b26eb708794291a126427e9aedfdc"}
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.518052 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.523457 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.541878 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ptn49" podStartSLOduration=3.785924047 podStartE2EDuration="7.541857293s" podCreationTimestamp="2026-01-23 16:21:46 +0000 UTC" firstStartedPulling="2026-01-23 16:21:49.057256462 +0000 UTC m=+310.204498453" lastFinishedPulling="2026-01-23 16:21:52.813189708 +0000 UTC m=+313.960431699" observedRunningTime="2026-01-23 16:21:53.53605547 +0000 UTC m=+314.683297461" watchObservedRunningTime="2026-01-23 16:21:53.541857293 +0000 UTC m=+314.689099284"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.861938 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-88bcb"]
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.862950 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.864749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.864763 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.866419 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.870024 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-g4smw"
Jan 23 16:21:53 crc kubenswrapper[4718]: I0123 16:21:53.887424 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-88bcb"]
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.013593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9906a3-196e-410d-b9ff-5fd05978af15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.013668 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6pc\" (UniqueName: \"kubernetes.io/projected/bb9906a3-196e-410d-b9ff-5fd05978af15-kube-api-access-8h6pc\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.014002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.014179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.116293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.116392 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.116434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9906a3-196e-410d-b9ff-5fd05978af15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.116479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6pc\" (UniqueName: \"kubernetes.io/projected/bb9906a3-196e-410d-b9ff-5fd05978af15-kube-api-access-8h6pc\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.117545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9906a3-196e-410d-b9ff-5fd05978af15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.126314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.130933 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb9906a3-196e-410d-b9ff-5fd05978af15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.145293 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6pc\" (UniqueName: \"kubernetes.io/projected/bb9906a3-196e-410d-b9ff-5fd05978af15-kube-api-access-8h6pc\") pod \"prometheus-operator-db54df47d-88bcb\" (UID: \"bb9906a3-196e-410d-b9ff-5fd05978af15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.180275 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb"
Jan 23 16:21:54 crc kubenswrapper[4718]: I0123 16:21:54.659845 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-88bcb"]
Jan 23 16:21:54 crc kubenswrapper[4718]: W0123 16:21:54.667830 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9906a3_196e_410d_b9ff_5fd05978af15.slice/crio-9879dc9ebc181e020d9bf7c50cc0c359baeca86d29aacca1c3e819ae0f3895f0 WatchSource:0}: Error finding container 9879dc9ebc181e020d9bf7c50cc0c359baeca86d29aacca1c3e819ae0f3895f0: Status 404 returned error can't find the container with id 9879dc9ebc181e020d9bf7c50cc0c359baeca86d29aacca1c3e819ae0f3895f0
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.074864 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vkwhk"]
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.076265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.095166 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vkwhk"]
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248439 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-registry-certificates\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248477 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-bound-sa-token\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e242fd-d251-4f42-8db6-44948d62ad87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248576 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzxg\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-kube-api-access-zzzxg\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248622 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-registry-tls\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248688 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-trusted-ca\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.248818 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e242fd-d251-4f42-8db6-44948d62ad87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.270395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350119 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e242fd-d251-4f42-8db6-44948d62ad87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350263 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-bound-sa-token\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-registry-certificates\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350366 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e242fd-d251-4f42-8db6-44948d62ad87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk"
Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350407 4718
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzxg\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-kube-api-access-zzzxg\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350457 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-registry-tls\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.350499 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-trusted-ca\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.352262 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e242fd-d251-4f42-8db6-44948d62ad87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.353668 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-trusted-ca\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc 
kubenswrapper[4718]: I0123 16:21:55.354133 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e242fd-d251-4f42-8db6-44948d62ad87-registry-certificates\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.357181 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-registry-tls\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.361219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e242fd-d251-4f42-8db6-44948d62ad87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.369449 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzxg\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-kube-api-access-zzzxg\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.369792 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e242fd-d251-4f42-8db6-44948d62ad87-bound-sa-token\") pod \"image-registry-66df7c8f76-vkwhk\" (UID: \"28e242fd-d251-4f42-8db6-44948d62ad87\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.448321 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.571418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb" event={"ID":"bb9906a3-196e-410d-b9ff-5fd05978af15","Type":"ContainerStarted","Data":"9879dc9ebc181e020d9bf7c50cc0c359baeca86d29aacca1c3e819ae0f3895f0"} Jan 23 16:21:55 crc kubenswrapper[4718]: I0123 16:21:55.888244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vkwhk"] Jan 23 16:21:55 crc kubenswrapper[4718]: W0123 16:21:55.898513 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28e242fd_d251_4f42_8db6_44948d62ad87.slice/crio-866a5a9947458809d2f5d044902e25836b00fdfeaf1b227c9589b6414b3dda26 WatchSource:0}: Error finding container 866a5a9947458809d2f5d044902e25836b00fdfeaf1b227c9589b6414b3dda26: Status 404 returned error can't find the container with id 866a5a9947458809d2f5d044902e25836b00fdfeaf1b227c9589b6414b3dda26 Jan 23 16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.579806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" event={"ID":"28e242fd-d251-4f42-8db6-44948d62ad87","Type":"ContainerStarted","Data":"7ead9fc74b50cb415c4785b4f947ec9cd120837d1de38ef1249ff390aa55e3cd"} Jan 23 16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.580148 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" event={"ID":"28e242fd-d251-4f42-8db6-44948d62ad87","Type":"ContainerStarted","Data":"866a5a9947458809d2f5d044902e25836b00fdfeaf1b227c9589b6414b3dda26"} Jan 23 
16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.580615 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.609158 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" podStartSLOduration=1.6091353609999999 podStartE2EDuration="1.609135361s" podCreationTimestamp="2026-01-23 16:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:21:56.603281686 +0000 UTC m=+317.750523677" watchObservedRunningTime="2026-01-23 16:21:56.609135361 +0000 UTC m=+317.756377352" Jan 23 16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.634107 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:56 crc kubenswrapper[4718]: I0123 16:21:56.634371 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" containerName="route-controller-manager" containerID="cri-o://8d843c2a2a25c7229dabaa941ae950037ed9e12dad87310617e14f42e3a0c166" gracePeriod=30 Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.588511 4718 generic.go:334] "Generic (PLEG): container finished" podID="362beb53-35c5-471b-9af1-ea5f9492b776" containerID="8d843c2a2a25c7229dabaa941ae950037ed9e12dad87310617e14f42e3a0c166" exitCode=0 Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.588809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" event={"ID":"362beb53-35c5-471b-9af1-ea5f9492b776","Type":"ContainerDied","Data":"8d843c2a2a25c7229dabaa941ae950037ed9e12dad87310617e14f42e3a0c166"} Jan 
23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.876184 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.902072 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l"] Jan 23 16:21:57 crc kubenswrapper[4718]: E0123 16:21:57.902323 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" containerName="route-controller-manager" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.902339 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" containerName="route-controller-manager" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.902449 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" containerName="route-controller-manager" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.902914 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.962596 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l"] Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987349 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config\") pod \"362beb53-35c5-471b-9af1-ea5f9492b776\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987476 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert\") pod \"362beb53-35c5-471b-9af1-ea5f9492b776\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987543 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca\") pod \"362beb53-35c5-471b-9af1-ea5f9492b776\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987570 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74n6v\" (UniqueName: \"kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v\") pod \"362beb53-35c5-471b-9af1-ea5f9492b776\" (UID: \"362beb53-35c5-471b-9af1-ea5f9492b776\") " Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-serving-cert\") pod 
\"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-config\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987872 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-client-ca\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.987911 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bcpp\" (UniqueName: \"kubernetes.io/projected/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-kube-api-access-2bcpp\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.988305 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config" (OuterVolumeSpecName: "config") pod "362beb53-35c5-471b-9af1-ea5f9492b776" (UID: "362beb53-35c5-471b-9af1-ea5f9492b776"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.988575 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca" (OuterVolumeSpecName: "client-ca") pod "362beb53-35c5-471b-9af1-ea5f9492b776" (UID: "362beb53-35c5-471b-9af1-ea5f9492b776"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.996484 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v" (OuterVolumeSpecName: "kube-api-access-74n6v") pod "362beb53-35c5-471b-9af1-ea5f9492b776" (UID: "362beb53-35c5-471b-9af1-ea5f9492b776"). InnerVolumeSpecName "kube-api-access-74n6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:21:57 crc kubenswrapper[4718]: I0123 16:21:57.996680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "362beb53-35c5-471b-9af1-ea5f9492b776" (UID: "362beb53-35c5-471b-9af1-ea5f9492b776"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bcpp\" (UniqueName: \"kubernetes.io/projected/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-kube-api-access-2bcpp\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-serving-cert\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089560 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-config\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089587 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-client-ca\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089651 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/362beb53-35c5-471b-9af1-ea5f9492b776-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089665 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089675 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74n6v\" (UniqueName: \"kubernetes.io/projected/362beb53-35c5-471b-9af1-ea5f9492b776-kube-api-access-74n6v\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.089684 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362beb53-35c5-471b-9af1-ea5f9492b776-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.090730 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-client-ca\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.091223 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-config\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.094939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-serving-cert\") pod 
\"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.110624 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bcpp\" (UniqueName: \"kubernetes.io/projected/84a56986-ccdf-4dc6-a6d9-8fbcd6127095-kube-api-access-2bcpp\") pod \"route-controller-manager-57667fdb67-jvw4l\" (UID: \"84a56986-ccdf-4dc6-a6d9-8fbcd6127095\") " pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.216569 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.598343 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb" event={"ID":"bb9906a3-196e-410d-b9ff-5fd05978af15","Type":"ContainerStarted","Data":"18773ace992e052af60279459a22489830b915a9b207c1e88de2ddf70d2ed6e9"} Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.598873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb" event={"ID":"bb9906a3-196e-410d-b9ff-5fd05978af15","Type":"ContainerStarted","Data":"33dfa6baffe057dd5574d9c4e803a147fe23ac581171630b0a4290138e0f74fc"} Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.600372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" event={"ID":"362beb53-35c5-471b-9af1-ea5f9492b776","Type":"ContainerDied","Data":"929d61d29ce8e1e498c99bac753b0cc4f5920b263f6a4f666829d882fc552a78"} Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.600452 4718 scope.go:117] "RemoveContainer" 
containerID="8d843c2a2a25c7229dabaa941ae950037ed9e12dad87310617e14f42e3a0c166" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.600404 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.618598 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-88bcb" podStartSLOduration=2.823966831 podStartE2EDuration="5.61857471s" podCreationTimestamp="2026-01-23 16:21:53 +0000 UTC" firstStartedPulling="2026-01-23 16:21:54.670653618 +0000 UTC m=+315.817895599" lastFinishedPulling="2026-01-23 16:21:57.465261487 +0000 UTC m=+318.612503478" observedRunningTime="2026-01-23 16:21:58.616428533 +0000 UTC m=+319.763670524" watchObservedRunningTime="2026-01-23 16:21:58.61857471 +0000 UTC m=+319.765816701" Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.639722 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.646405 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-zb6jm"] Jan 23 16:21:58 crc kubenswrapper[4718]: I0123 16:21:58.715448 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l"] Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.148051 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362beb53-35c5-471b-9af1-ea5f9492b776" path="/var/lib/kubelet/pods/362beb53-35c5-471b-9af1-ea5f9492b776/volumes" Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.609453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" 
event={"ID":"84a56986-ccdf-4dc6-a6d9-8fbcd6127095","Type":"ContainerStarted","Data":"7b71fd15545ead01bc4fd5662a066e67532dd960f134ef3547f56cb76000f67c"} Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.609915 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" event={"ID":"84a56986-ccdf-4dc6-a6d9-8fbcd6127095","Type":"ContainerStarted","Data":"d98e3a823c0695a55fab03f52a1c604f506f813b66a64dd682769e41034ddfe2"} Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.610086 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.615770 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" Jan 23 16:21:59 crc kubenswrapper[4718]: I0123 16:21:59.632572 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57667fdb67-jvw4l" podStartSLOduration=3.6325579599999998 podStartE2EDuration="3.63255796s" podCreationTimestamp="2026-01-23 16:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:21:59.628129762 +0000 UTC m=+320.775371763" watchObservedRunningTime="2026-01-23 16:21:59.63255796 +0000 UTC m=+320.779799961" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.302193 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms"] Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.303207 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.311191 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.311240 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-c75n8" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.311268 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.311373 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.311936 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v"] Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.313296 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.317457 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.330606 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.331252 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dvpn2" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.336730 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms"] Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.340755 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v"] Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.347264 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-glc4p"] Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.348513 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.351318 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.351656 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.352193 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-sgnzf" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430276 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430314 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: 
\"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430457 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-textfile\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430558 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430690 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430714 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r75q\" (UniqueName: \"kubernetes.io/projected/4463aa84-9380-4cdd-91d6-7d33bedefecf-kube-api-access-7r75q\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gtzq6\" (UniqueName: \"kubernetes.io/projected/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-api-access-gtzq6\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.430989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-sys\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431113 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431151 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431193 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84jt\" (UniqueName: \"kubernetes.io/projected/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-kube-api-access-q84jt\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431333 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431406 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-wtmp\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431480 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/76fe0edd-0797-4691-bbfb-0f0093cf09d9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4463aa84-9380-4cdd-91d6-7d33bedefecf-metrics-client-ca\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.431557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-root\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4463aa84-9380-4cdd-91d6-7d33bedefecf-metrics-client-ca\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-root\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533126 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-textfile\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533225 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533253 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533270 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r75q\" (UniqueName: \"kubernetes.io/projected/4463aa84-9380-4cdd-91d6-7d33bedefecf-kube-api-access-7r75q\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533291 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtzq6\" (UniqueName: \"kubernetes.io/projected/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-api-access-gtzq6\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533306 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-sys\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533315 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-root\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " 
pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533350 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: E0123 16:22:00.533456 4718 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Jan 23 16:22:00 crc kubenswrapper[4718]: E0123 16:22:00.533466 4718 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533489 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q84jt\" (UniqueName: \"kubernetes.io/projected/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-kube-api-access-q84jt\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: E0123 16:22:00.533516 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls podName:76fe0edd-0797-4691-bbfb-0f0093cf09d9 nodeName:}" failed. No retries permitted until 2026-01-23 16:22:01.03349681 +0000 UTC m=+322.180738801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-ldtms" (UID: "76fe0edd-0797-4691-bbfb-0f0093cf09d9") : secret "kube-state-metrics-tls" not found Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-sys\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: E0123 16:22:00.533553 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls podName:4463aa84-9380-4cdd-91d6-7d33bedefecf nodeName:}" failed. No retries permitted until 2026-01-23 16:22:01.03353114 +0000 UTC m=+322.180773131 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls") pod "node-exporter-glc4p" (UID: "4463aa84-9380-4cdd-91d6-7d33bedefecf") : secret "node-exporter-tls" not found Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533574 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533610 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533658 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-wtmp\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.533715 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/76fe0edd-0797-4691-bbfb-0f0093cf09d9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc 
kubenswrapper[4718]: I0123 16:22:00.533992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-textfile\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-wtmp\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4463aa84-9380-4cdd-91d6-7d33bedefecf-metrics-client-ca\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534154 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534416 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/76fe0edd-0797-4691-bbfb-0f0093cf09d9-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/76fe0edd-0797-4691-bbfb-0f0093cf09d9-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.534896 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.556667 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r75q\" (UniqueName: \"kubernetes.io/projected/4463aa84-9380-4cdd-91d6-7d33bedefecf-kube-api-access-7r75q\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.558428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtzq6\" (UniqueName: \"kubernetes.io/projected/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-api-access-gtzq6\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.560430 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q84jt\" (UniqueName: \"kubernetes.io/projected/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-kube-api-access-q84jt\") pod 
\"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.561532 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.562476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.563796 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 16:22:00.564199 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e5fc78e-b289-47d8-9272-bdbd6fdf2747-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g2t8v\" (UID: \"1e5fc78e-b289-47d8-9272-bdbd6fdf2747\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:00 crc kubenswrapper[4718]: I0123 
16:22:00.627131 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.040891 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.040963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.048024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/76fe0edd-0797-4691-bbfb-0f0093cf09d9-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ldtms\" (UID: \"76fe0edd-0797-4691-bbfb-0f0093cf09d9\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.049886 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4463aa84-9380-4cdd-91d6-7d33bedefecf-node-exporter-tls\") pod \"node-exporter-glc4p\" (UID: \"4463aa84-9380-4cdd-91d6-7d33bedefecf\") " pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.134644 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v"] Jan 23 16:22:01 crc kubenswrapper[4718]: 
I0123 16:22:01.222239 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.266271 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-glc4p" Jan 23 16:22:01 crc kubenswrapper[4718]: W0123 16:22:01.317213 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4463aa84_9380_4cdd_91d6_7d33bedefecf.slice/crio-cd52ce4fa0c006c305cf4894619a390123b497a8bf9cfc8e66cb4ef8981cb698 WatchSource:0}: Error finding container cd52ce4fa0c006c305cf4894619a390123b497a8bf9cfc8e66cb4ef8981cb698: Status 404 returned error can't find the container with id cd52ce4fa0c006c305cf4894619a390123b497a8bf9cfc8e66cb4ef8981cb698 Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.386958 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.390788 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.396041 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.396294 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.396447 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.396583 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.396752 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.399017 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-d5r2l" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.400048 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.400069 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.400669 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.422587 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553579 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553725 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-web-config\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv659\" (UniqueName: \"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-kube-api-access-nv659\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 
16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553910 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-config-volume\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.553969 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-config-out\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.554146 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.554293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-metric\") pod 
\"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.554326 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.554358 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.620579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glc4p" event={"ID":"4463aa84-9380-4cdd-91d6-7d33bedefecf","Type":"ContainerStarted","Data":"cd52ce4fa0c006c305cf4894619a390123b497a8bf9cfc8e66cb4ef8981cb698"} Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.624395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" event={"ID":"1e5fc78e-b289-47d8-9272-bdbd6fdf2747","Type":"ContainerStarted","Data":"5d23c146cbf4b196194971578e9dd329433c243b2a312ddf55ff649eeda0ef1e"} Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.624429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" event={"ID":"1e5fc78e-b289-47d8-9272-bdbd6fdf2747","Type":"ContainerStarted","Data":"8720d3a62d7f48ccff867bb3ff8f8d954ad444b288ef7751fdca70f45b71855f"} Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 
16:22:01.624442 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" event={"ID":"1e5fc78e-b289-47d8-9272-bdbd6fdf2747","Type":"ContainerStarted","Data":"54e736266055629ffef66efeb9b5b48170b3d72a89940fde333b11f134abc1c5"} Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655729 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv659\" (UniqueName: \"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-kube-api-access-nv659\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655778 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-config-volume\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-config-out\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655864 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655891 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.655976 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.656003 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.656029 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-web-config\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: E0123 16:22:01.657227 4718 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Jan 23 16:22:01 crc kubenswrapper[4718]: E0123 16:22:01.657331 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls podName:89d4128c-72d8-472a-a025-8ded46bb5b70 nodeName:}" failed. No retries permitted until 2026-01-23 16:22:02.157304552 +0000 UTC m=+323.304546543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "89d4128c-72d8-472a-a025-8ded46bb5b70") : secret "alertmanager-main-tls" not found Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.657798 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.658109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.661536 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d4128c-72d8-472a-a025-8ded46bb5b70-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.662968 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.663050 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.663086 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-config-volume\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.663329 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-web-config\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.663439 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89d4128c-72d8-472a-a025-8ded46bb5b70-config-out\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.669712 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.670140 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.673483 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv659\" (UniqueName: \"kubernetes.io/projected/89d4128c-72d8-472a-a025-8ded46bb5b70-kube-api-access-nv659\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:01 crc kubenswrapper[4718]: I0123 16:22:01.764766 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms"] Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.164266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.168563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89d4128c-72d8-472a-a025-8ded46bb5b70-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89d4128c-72d8-472a-a025-8ded46bb5b70\") " pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.337454 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.355617 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz"] Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.357967 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.360747 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.361122 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.361993 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bcr60g6t8f9nf" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.362319 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-jwhhm" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.364749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.365416 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.365579 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.375849 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz"] Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471121 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471740 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-metrics-client-ca\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471792 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-grpc-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471826 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njd5\" (UniqueName: 
\"kubernetes.io/projected/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-kube-api-access-6njd5\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.471956 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.472335 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.472405 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.573908 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " 
pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.573959 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574003 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-metrics-client-ca\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-grpc-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574075 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njd5\" (UniqueName: 
\"kubernetes.io/projected/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-kube-api-access-6njd5\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.574190 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.576070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-metrics-client-ca\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.579622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: 
I0123 16:22:02.579798 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.579900 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.581593 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.591148 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-grpc-tls\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.594561 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njd5\" (UniqueName: \"kubernetes.io/projected/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-kube-api-access-6njd5\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: 
\"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.606812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b4fd2138-8b95-4e9b-992a-d368d2f2ea94-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f7bdc8bf4-sj8vz\" (UID: \"b4fd2138-8b95-4e9b-992a-d368d2f2ea94\") " pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.631521 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" event={"ID":"76fe0edd-0797-4691-bbfb-0f0093cf09d9","Type":"ContainerStarted","Data":"61aa0f3c4bc3efc18d3d665022460ec538b3c0f6d4c6aa969cf618916ae2191a"} Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.688111 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:02 crc kubenswrapper[4718]: I0123 16:22:02.781509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 23 16:22:03 crc kubenswrapper[4718]: I0123 16:22:03.641408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"319c6936529d44a6fa2983f043c7bbacce0a7a8c0ece7038e559631f7e81cf24"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.137498 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz"] Jan 23 16:22:04 crc kubenswrapper[4718]: W0123 16:22:04.146953 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4fd2138_8b95_4e9b_992a_d368d2f2ea94.slice/crio-4e6e0c0f8c84cde5cf7301b0ece35f432b12188eafe891df840be6df5fc8a2d5 WatchSource:0}: Error finding container 4e6e0c0f8c84cde5cf7301b0ece35f432b12188eafe891df840be6df5fc8a2d5: Status 404 returned error can't find the container with id 4e6e0c0f8c84cde5cf7301b0ece35f432b12188eafe891df840be6df5fc8a2d5 Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.648237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" event={"ID":"1e5fc78e-b289-47d8-9272-bdbd6fdf2747","Type":"ContainerStarted","Data":"9dd3c8912cc512e018192474d11a2e23caf7d6dbf12cfcaa6270841419666420"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.651868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"4e6e0c0f8c84cde5cf7301b0ece35f432b12188eafe891df840be6df5fc8a2d5"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 
16:22:04.664865 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" event={"ID":"76fe0edd-0797-4691-bbfb-0f0093cf09d9","Type":"ContainerStarted","Data":"032ff154171ea1dbe037d792397b967de81d1caef735dfba3ef310ea61889452"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.664940 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" event={"ID":"76fe0edd-0797-4691-bbfb-0f0093cf09d9","Type":"ContainerStarted","Data":"44c6e52cddd80c1810aaa387f921c27c61fcddbdb94dc80c0dc0b7f4fffb1415"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.664954 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" event={"ID":"76fe0edd-0797-4691-bbfb-0f0093cf09d9","Type":"ContainerStarted","Data":"927ec029e65b8264a5acf67396f4877ccc0154fd14f75fc37295abeb667ed4d7"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.668262 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g2t8v" podStartSLOduration=2.520842054 podStartE2EDuration="4.668250591s" podCreationTimestamp="2026-01-23 16:22:00 +0000 UTC" firstStartedPulling="2026-01-23 16:22:01.552932663 +0000 UTC m=+322.700174654" lastFinishedPulling="2026-01-23 16:22:03.7003412 +0000 UTC m=+324.847583191" observedRunningTime="2026-01-23 16:22:04.667162052 +0000 UTC m=+325.814404043" watchObservedRunningTime="2026-01-23 16:22:04.668250591 +0000 UTC m=+325.815492582" Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.670836 4718 generic.go:334] "Generic (PLEG): container finished" podID="4463aa84-9380-4cdd-91d6-7d33bedefecf" containerID="6a5356cf0c297d081004c9aec5472856b9da9252d98894654deb9fe0a1918e66" exitCode=0 Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.670889 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-glc4p" event={"ID":"4463aa84-9380-4cdd-91d6-7d33bedefecf","Type":"ContainerDied","Data":"6a5356cf0c297d081004c9aec5472856b9da9252d98894654deb9fe0a1918e66"} Jan 23 16:22:04 crc kubenswrapper[4718]: I0123 16:22:04.696890 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ldtms" podStartSLOduration=2.744583 podStartE2EDuration="4.696866807s" podCreationTimestamp="2026-01-23 16:22:00 +0000 UTC" firstStartedPulling="2026-01-23 16:22:01.777015888 +0000 UTC m=+322.924257889" lastFinishedPulling="2026-01-23 16:22:03.729299705 +0000 UTC m=+324.876541696" observedRunningTime="2026-01-23 16:22:04.694727581 +0000 UTC m=+325.841969582" watchObservedRunningTime="2026-01-23 16:22:04.696866807 +0000 UTC m=+325.844108808" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.665427 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-67b49ccc4f-bxbv9"] Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.667501 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.672961 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.673295 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3ql31oo3289jt" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.673444 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-ms22f" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.673327 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.673357 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.673823 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.704995 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67b49ccc4f-bxbv9"] Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.715737 4718 generic.go:334] "Generic (PLEG): container finished" podID="89d4128c-72d8-472a-a025-8ded46bb5b70" containerID="60b0cc9e3f1583405954f3732f58e8057e10106429bd4abd517f8361f6b0f0c3" exitCode=0 Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.715818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerDied","Data":"60b0cc9e3f1583405954f3732f58e8057e10106429bd4abd517f8361f6b0f0c3"} Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.722087 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glc4p" event={"ID":"4463aa84-9380-4cdd-91d6-7d33bedefecf","Type":"ContainerStarted","Data":"d20bead10529aa782e195ab3184a5c09e39c2f51efbf9a952105c783cd542210"} Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.722114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glc4p" event={"ID":"4463aa84-9380-4cdd-91d6-7d33bedefecf","Type":"ContainerStarted","Data":"21cde03228653297deaa760d8168fdb05728119ed144a480456fffc77ea73098"} Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732426 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-client-certs\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732486 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-server-tls\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-client-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732609 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732654 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba11b9fa-937a-42f1-9559-79397077a342-audit-log\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6lp9\" (UniqueName: \"kubernetes.io/projected/ba11b9fa-937a-42f1-9559-79397077a342-kube-api-access-x6lp9\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.732700 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-metrics-server-audit-profiles\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.775038 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-glc4p" podStartSLOduration=3.406865539 podStartE2EDuration="5.775007413s" podCreationTimestamp="2026-01-23 16:22:00 +0000 UTC" 
firstStartedPulling="2026-01-23 16:22:01.329616768 +0000 UTC m=+322.476858759" lastFinishedPulling="2026-01-23 16:22:03.697758632 +0000 UTC m=+324.845000633" observedRunningTime="2026-01-23 16:22:05.768687666 +0000 UTC m=+326.915929677" watchObservedRunningTime="2026-01-23 16:22:05.775007413 +0000 UTC m=+326.922249424" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834195 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba11b9fa-937a-42f1-9559-79397077a342-audit-log\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834332 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lp9\" (UniqueName: \"kubernetes.io/projected/ba11b9fa-937a-42f1-9559-79397077a342-kube-api-access-x6lp9\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-metrics-server-audit-profiles\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " 
pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-client-certs\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.834709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-server-tls\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.836029 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-client-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.838367 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.838774 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba11b9fa-937a-42f1-9559-79397077a342-audit-log\") 
pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.839463 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba11b9fa-937a-42f1-9559-79397077a342-metrics-server-audit-profiles\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.851618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-server-tls\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.863125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lp9\" (UniqueName: \"kubernetes.io/projected/ba11b9fa-937a-42f1-9559-79397077a342-kube-api-access-x6lp9\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.863259 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-client-ca-bundle\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:05 crc kubenswrapper[4718]: I0123 16:22:05.882065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/ba11b9fa-937a-42f1-9559-79397077a342-secret-metrics-client-certs\") pod \"metrics-server-67b49ccc4f-bxbv9\" (UID: \"ba11b9fa-937a-42f1-9559-79397077a342\") " pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.023754 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l"] Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.024692 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.027865 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.029668 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.039693 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.040985 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l"] Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.155022 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54723299-2c07-40aa-a218-99c5317e84c5-monitoring-plugin-cert\") pod \"monitoring-plugin-59b4b6865b-tqr8l\" (UID: \"54723299-2c07-40aa-a218-99c5317e84c5\") " pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.260113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54723299-2c07-40aa-a218-99c5317e84c5-monitoring-plugin-cert\") pod \"monitoring-plugin-59b4b6865b-tqr8l\" (UID: \"54723299-2c07-40aa-a218-99c5317e84c5\") " pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.267769 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54723299-2c07-40aa-a218-99c5317e84c5-monitoring-plugin-cert\") pod \"monitoring-plugin-59b4b6865b-tqr8l\" (UID: \"54723299-2c07-40aa-a218-99c5317e84c5\") " pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.354428 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.686586 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.688940 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.691920 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.692809 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.693165 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.693331 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.693655 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.694287 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.694429 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.694614 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.694776 4718 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.694948 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6qsg4288lrtss" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.696795 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-7r2fb" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.697804 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.703429 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.721183 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768216 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-web-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768276 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768362 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768384 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config-out\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768404 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768422 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768443 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768461 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768489 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768546 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768570 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7588g\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-kube-api-access-7588g\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768614 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: 
\"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.768705 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870111 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config-out\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870151 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-db\") pod 
\"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870195 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870215 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870242 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870282 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870302 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870333 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7588g\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-kube-api-access-7588g\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870353 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870375 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870391 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-metrics-client-certs\") pod 
\"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870453 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-web-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.870672 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.871470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.871979 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.872106 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.873413 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.878138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.878369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.879259 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config-out\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.879416 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.879686 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.879945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 
16:22:06.880216 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.880382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-web-config\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.880987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.881190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.886966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.887062 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:06 crc kubenswrapper[4718]: I0123 16:22:06.893375 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7588g\" (UniqueName: \"kubernetes.io/projected/5a8875e9-a37e-4b44-a63d-88cbbd2aaefa-kube-api-access-7588g\") pod \"prometheus-k8s-0\" (UID: \"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.018695 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.462643 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l"] Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.482147 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67b49ccc4f-bxbv9"] Jan 23 16:22:07 crc kubenswrapper[4718]: W0123 16:22:07.487701 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba11b9fa_937a_42f1_9559_79397077a342.slice/crio-e10a01aa087136d4b03ee2aa8d704f9477944267f488da30f66d03a4c532e00d WatchSource:0}: Error finding container e10a01aa087136d4b03ee2aa8d704f9477944267f488da30f66d03a4c532e00d: Status 404 returned error can't find the container with id e10a01aa087136d4b03ee2aa8d704f9477944267f488da30f66d03a4c532e00d Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.618940 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 23 16:22:07 crc kubenswrapper[4718]: W0123 16:22:07.626166 4718 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a8875e9_a37e_4b44_a63d_88cbbd2aaefa.slice/crio-8ea0d32d10e5d20840b84fdcad07df8a9cf9d2e74e56bf7e736fba7fbf4ff35c WatchSource:0}: Error finding container 8ea0d32d10e5d20840b84fdcad07df8a9cf9d2e74e56bf7e736fba7fbf4ff35c: Status 404 returned error can't find the container with id 8ea0d32d10e5d20840b84fdcad07df8a9cf9d2e74e56bf7e736fba7fbf4ff35c Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.740474 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"8ea0d32d10e5d20840b84fdcad07df8a9cf9d2e74e56bf7e736fba7fbf4ff35c"} Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.741725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" event={"ID":"54723299-2c07-40aa-a218-99c5317e84c5","Type":"ContainerStarted","Data":"3996c093a3bc0bca6e9b8d7a765e7c4ae0db09de26a4d27e26ff5216abb86086"} Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.744106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"da3e1e4ce4adc2d8a853a85ff1b1227779831eeb3b081bb4986adfd15f4f7206"} Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.744154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"f55a329ef0dfc5053cb41d0c58b1f580388a70df74d7aa2ebec2aedded56f081"} Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.744166 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" 
event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"3928b2e73dbb134cb7c355116b2f97d7831f769fe324ea7da47c8f43e19451c5"} Jan 23 16:22:07 crc kubenswrapper[4718]: I0123 16:22:07.745113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" event={"ID":"ba11b9fa-937a-42f1-9559-79397077a342","Type":"ContainerStarted","Data":"e10a01aa087136d4b03ee2aa8d704f9477944267f488da30f66d03a4c532e00d"} Jan 23 16:22:08 crc kubenswrapper[4718]: I0123 16:22:08.758246 4718 generic.go:334] "Generic (PLEG): container finished" podID="5a8875e9-a37e-4b44-a63d-88cbbd2aaefa" containerID="364fd80f8545878386fafcf46d3045a7e2f09d12be5097561378ffbcbd65c691" exitCode=0 Jan 23 16:22:08 crc kubenswrapper[4718]: I0123 16:22:08.758333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerDied","Data":"364fd80f8545878386fafcf46d3045a7e2f09d12be5097561378ffbcbd65c691"} Jan 23 16:22:08 crc kubenswrapper[4718]: I0123 16:22:08.762660 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"cb12ae3675c08c8995448e848d1d41cf696954c5a7c93cde24781f10bdd8924d"} Jan 23 16:22:09 crc kubenswrapper[4718]: I0123 16:22:09.779472 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"d6083e9edc401f3a23e897349027e273d585dd1b6fe70c1da212edb1cf9950ce"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.787348 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" 
event={"ID":"ba11b9fa-937a-42f1-9559-79397077a342","Type":"ContainerStarted","Data":"ecbe354a434f4e11fd61b136de522d24a1ad016352e36f9fe210941c54a78c5f"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.790323 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" event={"ID":"54723299-2c07-40aa-a218-99c5317e84c5","Type":"ContainerStarted","Data":"cb095c668a4e83e5d3bd5560b5a21bbfd954964ad3cabaaf34b98f8b402f944d"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.793099 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.797205 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.799608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"0a29e883c3987646c61f54755edb1438e65c74227115af1a0e9c8a9746288006"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.799687 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"47a4a8ec517df35bf7a5e876d6adbfcc1643294449837cc6f1e6c8e8e9c75eb0"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.799699 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"6158b9901d46747a425f8790527655781c858446b4bd1b389d2e5d163fab0e29"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.799710 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"89d4128c-72d8-472a-a025-8ded46bb5b70","Type":"ContainerStarted","Data":"716ba3836c7c5db9abe2a8f9ab6b642865073058eec002339f568fc8713fab93"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.803216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"68b16b70f5fe3bb96bfd6796140905327a729d6af7fe478527ce01102e2a6b8c"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.803286 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"0287ad91b6b12347ed2bdc449fde49f9e03997479ce659c504b7bdb3c5d49b85"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.803305 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" event={"ID":"b4fd2138-8b95-4e9b-992a-d368d2f2ea94","Type":"ContainerStarted","Data":"4978839384e91aae32f22e6d5b18fba03bf4704b47bf3a34f4a9923f1394ba62"} Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.803532 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.843769 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" podStartSLOduration=3.213025474 podStartE2EDuration="5.843740304s" podCreationTimestamp="2026-01-23 16:22:05 +0000 UTC" firstStartedPulling="2026-01-23 16:22:07.498272329 +0000 UTC m=+328.645514320" lastFinishedPulling="2026-01-23 16:22:10.128987159 +0000 UTC m=+331.276229150" observedRunningTime="2026-01-23 16:22:10.813566906 +0000 UTC m=+331.960808947" watchObservedRunningTime="2026-01-23 16:22:10.843740304 +0000 UTC m=+331.990982295" Jan 23 16:22:10 crc 
kubenswrapper[4718]: I0123 16:22:10.892153 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.607236993 podStartE2EDuration="9.892130377s" podCreationTimestamp="2026-01-23 16:22:01 +0000 UTC" firstStartedPulling="2026-01-23 16:22:03.161287097 +0000 UTC m=+324.308529088" lastFinishedPulling="2026-01-23 16:22:08.446180481 +0000 UTC m=+329.593422472" observedRunningTime="2026-01-23 16:22:10.855112291 +0000 UTC m=+332.002354292" watchObservedRunningTime="2026-01-23 16:22:10.892130377 +0000 UTC m=+332.039372368" Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.895125 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" podStartSLOduration=2.952382438 podStartE2EDuration="8.895115058s" podCreationTimestamp="2026-01-23 16:22:02 +0000 UTC" firstStartedPulling="2026-01-23 16:22:04.150368858 +0000 UTC m=+325.297610849" lastFinishedPulling="2026-01-23 16:22:10.093101458 +0000 UTC m=+331.240343469" observedRunningTime="2026-01-23 16:22:10.889180647 +0000 UTC m=+332.036422638" watchObservedRunningTime="2026-01-23 16:22:10.895115058 +0000 UTC m=+332.042357039" Jan 23 16:22:10 crc kubenswrapper[4718]: I0123 16:22:10.914413 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-59b4b6865b-tqr8l" podStartSLOduration=2.263866641 podStartE2EDuration="4.914384634s" podCreationTimestamp="2026-01-23 16:22:06 +0000 UTC" firstStartedPulling="2026-01-23 16:22:07.473672561 +0000 UTC m=+328.620914552" lastFinishedPulling="2026-01-23 16:22:10.124190544 +0000 UTC m=+331.271432545" observedRunningTime="2026-01-23 16:22:10.908298959 +0000 UTC m=+332.055540960" watchObservedRunningTime="2026-01-23 16:22:10.914384634 +0000 UTC m=+332.061626625" Jan 23 16:22:12 crc kubenswrapper[4718]: I0123 16:22:12.703421 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" Jan 23 16:22:13 crc kubenswrapper[4718]: I0123 16:22:13.831808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"dad7be75e110713b7132b5aeb0d8576dad065793f0ecbfb90416bd49681cf9bb"} Jan 23 16:22:13 crc kubenswrapper[4718]: I0123 16:22:13.832068 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"ffacaa68b2c779eccdcbdc899d12153f6b4869698362442a0f726773ebcb2cd0"} Jan 23 16:22:13 crc kubenswrapper[4718]: I0123 16:22:13.832089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"e64d0865cf14a8235db04c20a7fd3d9a6ac6a1d4cb3d82187c12f5f8f891f6dd"} Jan 23 16:22:13 crc kubenswrapper[4718]: I0123 16:22:13.832102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"40fbed27e0ba2a9d87096c0959fea6a8b2b2c90ce728451df6accca7184ad0fc"} Jan 23 16:22:14 crc kubenswrapper[4718]: I0123 16:22:14.842788 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"a3315f221ca9db7a15290ebb4523af4cc4364c41d4aa957184d4367c3f6d01b4"} Jan 23 16:22:14 crc kubenswrapper[4718]: I0123 16:22:14.842848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5a8875e9-a37e-4b44-a63d-88cbbd2aaefa","Type":"ContainerStarted","Data":"10fb32230aaf2be661c645019f3395db93c44b5b73662238c4fa919feeb3ad50"} Jan 23 16:22:14 crc kubenswrapper[4718]: 
I0123 16:22:14.885846 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.563766583 podStartE2EDuration="8.885814561s" podCreationTimestamp="2026-01-23 16:22:06 +0000 UTC" firstStartedPulling="2026-01-23 16:22:08.760207649 +0000 UTC m=+329.907449640" lastFinishedPulling="2026-01-23 16:22:13.082255627 +0000 UTC m=+334.229497618" observedRunningTime="2026-01-23 16:22:14.877504148 +0000 UTC m=+336.024746179" watchObservedRunningTime="2026-01-23 16:22:14.885814561 +0000 UTC m=+336.033056582" Jan 23 16:22:15 crc kubenswrapper[4718]: I0123 16:22:15.460171 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" Jan 23 16:22:15 crc kubenswrapper[4718]: I0123 16:22:15.560327 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"] Jan 23 16:22:17 crc kubenswrapper[4718]: I0123 16:22:17.019601 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.389299 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.390947 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.414620 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.547060 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.548584 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.548759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.548883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.548983 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.549016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xg6\" (UniqueName: \"kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.549097 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.650752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.651137 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.651170 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.651227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.651964 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.652333 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.652505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.652539 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j8xg6\" (UniqueName: \"kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.652564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.653353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.654149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.662172 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.662319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.670284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xg6\" (UniqueName: \"kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6\") pod \"console-8449bd689b-2nbw2\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:23 crc kubenswrapper[4718]: I0123 16:22:23.727843 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:24 crc kubenswrapper[4718]: I0123 16:22:24.248381 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:22:24 crc kubenswrapper[4718]: I0123 16:22:24.946335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8449bd689b-2nbw2" event={"ID":"2710faa3-2347-4d77-a3b1-64df149d94d4","Type":"ContainerStarted","Data":"10fddb3e403c10f36c84b5c20e789076caf7a783886de213589a6f762d6b2675"} Jan 23 16:22:26 crc kubenswrapper[4718]: I0123 16:22:26.041083 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:26 crc kubenswrapper[4718]: I0123 16:22:26.041867 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:26 crc kubenswrapper[4718]: I0123 16:22:26.963734 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8449bd689b-2nbw2" 
event={"ID":"2710faa3-2347-4d77-a3b1-64df149d94d4","Type":"ContainerStarted","Data":"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61"} Jan 23 16:22:28 crc kubenswrapper[4718]: I0123 16:22:28.013242 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8449bd689b-2nbw2" podStartSLOduration=5.013211178 podStartE2EDuration="5.013211178s" podCreationTimestamp="2026-01-23 16:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:22:27.998681526 +0000 UTC m=+349.145923517" watchObservedRunningTime="2026-01-23 16:22:28.013211178 +0000 UTC m=+349.160453169" Jan 23 16:22:33 crc kubenswrapper[4718]: I0123 16:22:33.728735 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:33 crc kubenswrapper[4718]: I0123 16:22:33.729916 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:33 crc kubenswrapper[4718]: I0123 16:22:33.738695 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:34 crc kubenswrapper[4718]: I0123 16:22:34.024130 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:22:34 crc kubenswrapper[4718]: I0123 16:22:34.118957 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"] Jan 23 16:22:40 crc kubenswrapper[4718]: I0123 16:22:40.614598 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" podUID="366c0aee-b870-49b2-8500-06f6529c270c" containerName="registry" containerID="cri-o://71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c" 
gracePeriod=30 Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.035896 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.087989 4718 generic.go:334] "Generic (PLEG): container finished" podID="366c0aee-b870-49b2-8500-06f6529c270c" containerID="71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c" exitCode=0 Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.088067 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" event={"ID":"366c0aee-b870-49b2-8500-06f6529c270c","Type":"ContainerDied","Data":"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c"} Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.088117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" event={"ID":"366c0aee-b870-49b2-8500-06f6529c270c","Type":"ContainerDied","Data":"17bcaa06b625a0504917524cea3fa8812bf1ca3ad92711792bfcc19a61cc985b"} Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.088152 4718 scope.go:117] "RemoveContainer" containerID="71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.088359 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmshq" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.110374 4718 scope.go:117] "RemoveContainer" containerID="71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c" Jan 23 16:22:41 crc kubenswrapper[4718]: E0123 16:22:41.111093 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c\": container with ID starting with 71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c not found: ID does not exist" containerID="71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.111171 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c"} err="failed to get container status \"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c\": rpc error: code = NotFound desc = could not find container \"71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c\": container with ID starting with 71ee9c0ccb8903dce1240475cbad87a4576f47f1bf575288aa1bbe04b2bffd8c not found: ID does not exist" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.123979 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.124171 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: 
\"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.124205 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.124308 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.124489 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.125787 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.125878 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.125950 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.125984 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dfj\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj\") pod \"366c0aee-b870-49b2-8500-06f6529c270c\" (UID: \"366c0aee-b870-49b2-8500-06f6529c270c\") " Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.127736 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.128181 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.128214 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/366c0aee-b870-49b2-8500-06f6529c270c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.134701 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.135865 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj" (OuterVolumeSpecName: "kube-api-access-w6dfj") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "kube-api-access-w6dfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.137016 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.138600 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.140992 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.158626 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "366c0aee-b870-49b2-8500-06f6529c270c" (UID: "366c0aee-b870-49b2-8500-06f6529c270c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.229561 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/366c0aee-b870-49b2-8500-06f6529c270c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.229608 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/366c0aee-b870-49b2-8500-06f6529c270c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.229623 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.229634 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.229663 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6dfj\" (UniqueName: \"kubernetes.io/projected/366c0aee-b870-49b2-8500-06f6529c270c-kube-api-access-w6dfj\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.422302 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"] Jan 23 16:22:41 crc kubenswrapper[4718]: I0123 16:22:41.427775 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmshq"] Jan 23 16:22:43 crc kubenswrapper[4718]: I0123 16:22:43.152797 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366c0aee-b870-49b2-8500-06f6529c270c" 
path="/var/lib/kubelet/pods/366c0aee-b870-49b2-8500-06f6529c270c/volumes" Jan 23 16:22:46 crc kubenswrapper[4718]: I0123 16:22:46.053989 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:46 crc kubenswrapper[4718]: I0123 16:22:46.064804 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" Jan 23 16:22:58 crc kubenswrapper[4718]: I0123 16:22:58.876217 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:22:58 crc kubenswrapper[4718]: I0123 16:22:58.877838 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.193244 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lt6zb" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" containerID="cri-o://eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7" gracePeriod=15 Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.588649 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lt6zb_0893e7ff-b1d9-4227-ae44-a873d8355a70/console/0.log" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.589010 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671087 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671534 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfwsj\" (UniqueName: \"kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671750 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671830 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671871 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.671895 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle\") pod \"0893e7ff-b1d9-4227-ae44-a873d8355a70\" (UID: \"0893e7ff-b1d9-4227-ae44-a873d8355a70\") " Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.672383 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.672805 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.672889 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config" (OuterVolumeSpecName: "console-config") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.673176 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca" (OuterVolumeSpecName: "service-ca") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.685918 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.686142 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.687455 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj" (OuterVolumeSpecName: "kube-api-access-vfwsj") pod "0893e7ff-b1d9-4227-ae44-a873d8355a70" (UID: "0893e7ff-b1d9-4227-ae44-a873d8355a70"). InnerVolumeSpecName "kube-api-access-vfwsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.773702 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774011 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774076 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774133 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774200 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfwsj\" (UniqueName: \"kubernetes.io/projected/0893e7ff-b1d9-4227-ae44-a873d8355a70-kube-api-access-vfwsj\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774260 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:22:59 crc kubenswrapper[4718]: I0123 16:22:59.774322 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0893e7ff-b1d9-4227-ae44-a873d8355a70-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:23:00 crc 
kubenswrapper[4718]: I0123 16:23:00.247821 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lt6zb_0893e7ff-b1d9-4227-ae44-a873d8355a70/console/0.log" Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.247898 4718 generic.go:334] "Generic (PLEG): container finished" podID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerID="eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7" exitCode=2 Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.247938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lt6zb" event={"ID":"0893e7ff-b1d9-4227-ae44-a873d8355a70","Type":"ContainerDied","Data":"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7"} Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.247976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lt6zb" event={"ID":"0893e7ff-b1d9-4227-ae44-a873d8355a70","Type":"ContainerDied","Data":"9bad95bc040606a183f3b3dc374948829e974261cb26d15863d242647571f23e"} Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.248005 4718 scope.go:117] "RemoveContainer" containerID="eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7" Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.248067 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lt6zb" Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.275114 4718 scope.go:117] "RemoveContainer" containerID="eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7" Jan 23 16:23:00 crc kubenswrapper[4718]: E0123 16:23:00.275999 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7\": container with ID starting with eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7 not found: ID does not exist" containerID="eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7" Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.276102 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7"} err="failed to get container status \"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7\": rpc error: code = NotFound desc = could not find container \"eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7\": container with ID starting with eb079f66ddc0992d9bfe2a7a428705819d273ba55a945c90a162c673e93946f7 not found: ID does not exist" Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.312551 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"] Jan 23 16:23:00 crc kubenswrapper[4718]: I0123 16:23:00.323670 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-lt6zb"] Jan 23 16:23:01 crc kubenswrapper[4718]: I0123 16:23:01.156727 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" path="/var/lib/kubelet/pods/0893e7ff-b1d9-4227-ae44-a873d8355a70/volumes" Jan 23 16:23:07 crc kubenswrapper[4718]: I0123 16:23:07.018789 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:23:07 crc kubenswrapper[4718]: I0123 16:23:07.059693 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:23:07 crc kubenswrapper[4718]: I0123 16:23:07.344248 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 23 16:23:28 crc kubenswrapper[4718]: I0123 16:23:28.875424 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:23:28 crc kubenswrapper[4718]: I0123 16:23:28.876260 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.546305 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:23:50 crc kubenswrapper[4718]: E0123 16:23:50.547236 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366c0aee-b870-49b2-8500-06f6529c270c" containerName="registry" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.547255 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="366c0aee-b870-49b2-8500-06f6529c270c" containerName="registry" Jan 23 16:23:50 crc kubenswrapper[4718]: E0123 16:23:50.547270 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.547279 
4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.547462 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0893e7ff-b1d9-4227-ae44-a873d8355a70" containerName="console" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.547490 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="366c0aee-b870-49b2-8500-06f6529c270c" containerName="registry" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.548222 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.554486 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.554799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.555088 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.555304 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z25g\" (UniqueName: \"kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.555499 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.555756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.555960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.632536 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z25g\" (UniqueName: \"kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657745 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.657797 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.659498 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.660032 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.661072 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.662064 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.667243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.669778 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.692892 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z25g\" (UniqueName: \"kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g\") pod \"console-5c466d7564-7dkhr\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:50 crc kubenswrapper[4718]: I0123 16:23:50.873740 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:23:51 crc kubenswrapper[4718]: I0123 16:23:51.117594 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:23:51 crc kubenswrapper[4718]: I0123 16:23:51.674721 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c466d7564-7dkhr" event={"ID":"e61d3adb-1166-4fe7-b38d-efbbf01446ea","Type":"ContainerStarted","Data":"67b17f6cb013b001ee0896b5d0004543934916a49a6827d678cf260828a6b6f4"} Jan 23 16:23:51 crc kubenswrapper[4718]: I0123 16:23:51.674795 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c466d7564-7dkhr" event={"ID":"e61d3adb-1166-4fe7-b38d-efbbf01446ea","Type":"ContainerStarted","Data":"bacb5c2f17e6d2a6012b51d92968c4cf42b6294c16ddaafc7f0589d985d685a7"} Jan 23 16:23:51 crc kubenswrapper[4718]: I0123 16:23:51.728690 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c466d7564-7dkhr" podStartSLOduration=1.728606568 podStartE2EDuration="1.728606568s" podCreationTimestamp="2026-01-23 16:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:23:51.716739691 +0000 UTC m=+432.863981722" watchObservedRunningTime="2026-01-23 16:23:51.728606568 +0000 UTC m=+432.875848599" Jan 23 16:23:58 crc kubenswrapper[4718]: I0123 16:23:58.875991 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:23:58 crc kubenswrapper[4718]: I0123 16:23:58.876966 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:23:58 crc kubenswrapper[4718]: I0123 16:23:58.877044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:23:58 crc kubenswrapper[4718]: I0123 16:23:58.878784 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:23:58 crc kubenswrapper[4718]: I0123 16:23:58.878888 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833" gracePeriod=600 Jan 23 16:23:59 crc kubenswrapper[4718]: I0123 16:23:59.753605 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833" exitCode=0 Jan 23 16:23:59 crc kubenswrapper[4718]: I0123 16:23:59.753709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833"} Jan 23 16:23:59 crc kubenswrapper[4718]: I0123 16:23:59.754037 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278"} Jan 23 16:23:59 crc kubenswrapper[4718]: I0123 16:23:59.754073 4718 scope.go:117] "RemoveContainer" containerID="cb6a0bed8bcbc3f16cb3df8dbee72a17964837cc851622e1a459b38b5777a012" Jan 23 16:24:00 crc kubenswrapper[4718]: I0123 16:24:00.874649 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:24:00 crc kubenswrapper[4718]: I0123 16:24:00.875010 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:24:00 crc kubenswrapper[4718]: I0123 16:24:00.882478 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:24:01 crc kubenswrapper[4718]: I0123 16:24:01.775203 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:24:01 crc kubenswrapper[4718]: I0123 16:24:01.859328 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:24:26 crc kubenswrapper[4718]: I0123 16:24:26.923468 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-8449bd689b-2nbw2" podUID="2710faa3-2347-4d77-a3b1-64df149d94d4" containerName="console" containerID="cri-o://92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61" gracePeriod=15 Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.314592 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8449bd689b-2nbw2_2710faa3-2347-4d77-a3b1-64df149d94d4/console/0.log" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.314911 4718 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495457 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xg6\" (UniqueName: \"kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495563 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495671 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.495709 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle\") pod \"2710faa3-2347-4d77-a3b1-64df149d94d4\" (UID: \"2710faa3-2347-4d77-a3b1-64df149d94d4\") " Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.496542 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.496562 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.496588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config" (OuterVolumeSpecName: "console-config") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.496623 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca" (OuterVolumeSpecName: "service-ca") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.497138 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.497172 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.497191 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.497208 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2710faa3-2347-4d77-a3b1-64df149d94d4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.503093 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6" (OuterVolumeSpecName: "kube-api-access-j8xg6") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "kube-api-access-j8xg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.505746 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.505802 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2710faa3-2347-4d77-a3b1-64df149d94d4" (UID: "2710faa3-2347-4d77-a3b1-64df149d94d4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.599222 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xg6\" (UniqueName: \"kubernetes.io/projected/2710faa3-2347-4d77-a3b1-64df149d94d4-kube-api-access-j8xg6\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.599275 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.599293 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2710faa3-2347-4d77-a3b1-64df149d94d4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.980701 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-8449bd689b-2nbw2_2710faa3-2347-4d77-a3b1-64df149d94d4/console/0.log" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.981046 4718 generic.go:334] "Generic (PLEG): container finished" podID="2710faa3-2347-4d77-a3b1-64df149d94d4" containerID="92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61" exitCode=2 Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.981087 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8449bd689b-2nbw2" event={"ID":"2710faa3-2347-4d77-a3b1-64df149d94d4","Type":"ContainerDied","Data":"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61"} Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.981126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8449bd689b-2nbw2" event={"ID":"2710faa3-2347-4d77-a3b1-64df149d94d4","Type":"ContainerDied","Data":"10fddb3e403c10f36c84b5c20e789076caf7a783886de213589a6f762d6b2675"} Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.981137 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8449bd689b-2nbw2" Jan 23 16:24:27 crc kubenswrapper[4718]: I0123 16:24:27.981148 4718 scope.go:117] "RemoveContainer" containerID="92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61" Jan 23 16:24:28 crc kubenswrapper[4718]: I0123 16:24:28.004787 4718 scope.go:117] "RemoveContainer" containerID="92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61" Jan 23 16:24:28 crc kubenswrapper[4718]: E0123 16:24:28.007209 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61\": container with ID starting with 92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61 not found: ID does not exist" containerID="92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61" Jan 23 16:24:28 crc kubenswrapper[4718]: I0123 16:24:28.007264 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61"} err="failed to get container status \"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61\": rpc error: code = NotFound desc = could not find container \"92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61\": container with ID starting with 92f48f14d93b5cd594f7b64d03684b686a7d7c4c27200a56c2cb9272b472cc61 not found: ID does not exist" Jan 23 16:24:28 crc kubenswrapper[4718]: I0123 16:24:28.032700 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:24:28 crc kubenswrapper[4718]: I0123 16:24:28.039899 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-8449bd689b-2nbw2"] Jan 23 16:24:29 crc kubenswrapper[4718]: I0123 16:24:29.154741 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2710faa3-2347-4d77-a3b1-64df149d94d4" path="/var/lib/kubelet/pods/2710faa3-2347-4d77-a3b1-64df149d94d4/volumes" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.541459 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm"] Jan 23 16:26:11 crc kubenswrapper[4718]: E0123 16:26:11.542716 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2710faa3-2347-4d77-a3b1-64df149d94d4" containerName="console" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.542738 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2710faa3-2347-4d77-a3b1-64df149d94d4" containerName="console" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.542968 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2710faa3-2347-4d77-a3b1-64df149d94d4" containerName="console" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.544397 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.548595 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.560252 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm"] Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.694221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6lk\" (UniqueName: \"kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.694291 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.694324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: 
I0123 16:26:11.795678 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6lk\" (UniqueName: \"kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.795730 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.795757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.796318 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.796470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.818167 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6lk\" (UniqueName: \"kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:11 crc kubenswrapper[4718]: I0123 16:26:11.868739 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:12 crc kubenswrapper[4718]: I0123 16:26:12.143925 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm"] Jan 23 16:26:12 crc kubenswrapper[4718]: I0123 16:26:12.887042 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerID="51756fe954974d36a8ee518db62d25fc91d4bbee2ef33ea948f745823dcbea4a" exitCode=0 Jan 23 16:26:12 crc kubenswrapper[4718]: I0123 16:26:12.887554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" event={"ID":"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164","Type":"ContainerDied","Data":"51756fe954974d36a8ee518db62d25fc91d4bbee2ef33ea948f745823dcbea4a"} Jan 23 16:26:12 crc kubenswrapper[4718]: I0123 16:26:12.887617 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" event={"ID":"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164","Type":"ContainerStarted","Data":"93a02843f0cb2505c0d1fa32e032b04e8ed4b4b9172e807e9231614099120b8a"} Jan 23 16:26:12 crc kubenswrapper[4718]: I0123 16:26:12.890462 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:26:14 crc kubenswrapper[4718]: I0123 16:26:14.906514 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerID="729991b125aedce5969b64c7579037805749150b2898729d4a54c27eaa93188d" exitCode=0 Jan 23 16:26:14 crc kubenswrapper[4718]: I0123 16:26:14.906682 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" event={"ID":"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164","Type":"ContainerDied","Data":"729991b125aedce5969b64c7579037805749150b2898729d4a54c27eaa93188d"} Jan 23 16:26:15 crc kubenswrapper[4718]: I0123 16:26:15.916882 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerID="ceb50f8a99479e8cac228deee201872e85c8486c00d2397b5e651fdc82589d00" exitCode=0 Jan 23 16:26:15 crc kubenswrapper[4718]: I0123 16:26:15.917394 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" event={"ID":"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164","Type":"ContainerDied","Data":"ceb50f8a99479e8cac228deee201872e85c8486c00d2397b5e651fdc82589d00"} Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.264147 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.291071 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6lk\" (UniqueName: \"kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk\") pod \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.291154 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle\") pod \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.291337 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util\") pod \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\" (UID: \"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164\") " Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.293709 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle" (OuterVolumeSpecName: "bundle") pod "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" (UID: "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.299225 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk" (OuterVolumeSpecName: "kube-api-access-bb6lk") pod "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" (UID: "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164"). InnerVolumeSpecName "kube-api-access-bb6lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.305689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util" (OuterVolumeSpecName: "util") pod "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" (UID: "6bc79f32-12bc-4d6d-ad7d-ebe468f6e164"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.394000 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.394054 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6lk\" (UniqueName: \"kubernetes.io/projected/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-kube-api-access-bb6lk\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.394078 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc79f32-12bc-4d6d-ad7d-ebe468f6e164-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.937033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" event={"ID":"6bc79f32-12bc-4d6d-ad7d-ebe468f6e164","Type":"ContainerDied","Data":"93a02843f0cb2505c0d1fa32e032b04e8ed4b4b9172e807e9231614099120b8a"} Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.937112 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a02843f0cb2505c0d1fa32e032b04e8ed4b4b9172e807e9231614099120b8a" Jan 23 16:26:17 crc kubenswrapper[4718]: I0123 16:26:17.937204 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm" Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.644728 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qnds"] Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645455 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-controller" containerID="cri-o://ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645580 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="northd" containerID="cri-o://6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645585 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="nbdb" containerID="cri-o://ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645646 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-acl-logging" containerID="cri-o://86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645657 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" 
containerName="kube-rbac-proxy-node" containerID="cri-o://d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645721 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="sbdb" containerID="cri-o://3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.645833 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" gracePeriod=30 Jan 23 16:26:22 crc kubenswrapper[4718]: I0123 16:26:22.697592 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" containerID="cri-o://6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" gracePeriod=30 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.019480 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/2.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.021371 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/1.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.021414 4718 generic.go:334] "Generic (PLEG): container finished" podID="d2a07769-1921-4484-b1cd-28b23487bb39" containerID="057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed" exitCode=2 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.021494 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerDied","Data":"057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.021536 4718 scope.go:117] "RemoveContainer" containerID="20405d329c7193cd8e7cb3a92d8ead168665e11059b3a658afb9533dad110282" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.023001 4718 scope.go:117] "RemoveContainer" containerID="057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed" Jan 23 16:26:23 crc kubenswrapper[4718]: E0123 16:26:23.023474 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tb79v_openshift-multus(d2a07769-1921-4484-b1cd-28b23487bb39)\"" pod="openshift-multus/multus-tb79v" podUID="d2a07769-1921-4484-b1cd-28b23487bb39" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.036827 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovnkube-controller/3.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.047139 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-acl-logging/0.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.051380 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-controller/0.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052497 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" exitCode=0 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 
16:26:23.052548 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" exitCode=0 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052558 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" exitCode=0 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052566 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" exitCode=143 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052578 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" exitCode=143 Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052605 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052658 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052669 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052678 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.052688 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264"} Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.064398 4718 scope.go:117] "RemoveContainer" containerID="35c1e91f9ba73704b5e35e659d14f7e134e56bf945e8347009d81cbdf3fd1d32" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.946106 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-acl-logging/0.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.946879 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-controller/0.log" Jan 23 16:26:23 crc kubenswrapper[4718]: I0123 16:26:23.947270 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012079 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012166 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012204 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket" (OuterVolumeSpecName: "log-socket") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012246 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012320 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.012375 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013128 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013155 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013219 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsnx5\" (UniqueName: \"kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013269 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013293 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013219 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013355 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013230 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013381 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013417 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013433 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013453 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013494 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013484 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013551 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013550 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013528 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013603 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013610 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log" (OuterVolumeSpecName: "node-log") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib\") pod \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\" (UID: \"4985ab62-43a5-4fd8-919c-f9db2eea18f7\") " Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.013621 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash" (OuterVolumeSpecName: "host-slash") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014097 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014117 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014251 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014492 4718 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014515 4718 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014528 4718 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014543 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014553 4718 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014563 4718 
reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014572 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014581 4718 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014592 4718 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014604 4718 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014615 4718 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014649 4718 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014659 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014667 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014675 4718 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014684 4718 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.014692 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.026008 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5" (OuterVolumeSpecName: "kube-api-access-wsnx5") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "kube-api-access-wsnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.029900 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.046391 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4985ab62-43a5-4fd8-919c-f9db2eea18f7" (UID: "4985ab62-43a5-4fd8-919c-f9db2eea18f7"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.062185 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/2.log" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.070514 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-acl-logging/0.log" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071228 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qnds_4985ab62-43a5-4fd8-919c-f9db2eea18f7/ovn-controller/0.log" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071754 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" exitCode=0 Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071791 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" exitCode=0 Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071808 4718 generic.go:334] "Generic (PLEG): container finished" podID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" exitCode=0 Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.071836 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da"} Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071877 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19"} Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071934 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521"} Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071949 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qnds" event={"ID":"4985ab62-43a5-4fd8-919c-f9db2eea18f7","Type":"ContainerDied","Data":"73c5172b533b332ddb7430b2f83bbba202faeb4fa77a0f449a4a300a4dd60eae"} Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.071974 4718 scope.go:117] "RemoveContainer" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.093433 4718 scope.go:117] "RemoveContainer" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.111137 4718 scope.go:117] "RemoveContainer" containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.117491 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsnx5\" (UniqueName: \"kubernetes.io/projected/4985ab62-43a5-4fd8-919c-f9db2eea18f7-kube-api-access-wsnx5\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.117530 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4985ab62-43a5-4fd8-919c-f9db2eea18f7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.117541 4718 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4985ab62-43a5-4fd8-919c-f9db2eea18f7-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.126643 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qnds"] Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.142810 4718 scope.go:117] "RemoveContainer" containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160183 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cs66r"] Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160467 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="pull" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160481 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="pull" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160494 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="util" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160500 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="util" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160510 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kubecfg-setup" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160518 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kubecfg-setup" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160526 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160534 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160545 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-node" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160552 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-node" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160560 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160566 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160574 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-acl-logging" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160581 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-acl-logging" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160588 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="extract" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160593 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="extract" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160604 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="sbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160610 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="sbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160616 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160621 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160647 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160654 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160668 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160674 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160682 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="northd" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160688 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="northd" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160697 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="nbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160703 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="nbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.160710 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160716 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160848 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="northd" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160862 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc79f32-12bc-4d6d-ad7d-ebe468f6e164" containerName="extract" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160869 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160877 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-node" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160883 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160894 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160902 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160910 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovn-acl-logging" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160916 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160923 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="nbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160931 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="sbdb" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160941 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.160948 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.161049 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.161056 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" containerName="ovnkube-controller" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.163313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.167906 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.168162 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.168387 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.169889 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qnds"] Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.180167 4718 scope.go:117] "RemoveContainer" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219524 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-netd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219591 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-node-log\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219613 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-kubelet\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-ovn\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219672 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219701 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-systemd-units\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219722 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-env-overrides\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.219982 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-slash\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-bin\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2kx\" (UniqueName: \"kubernetes.io/projected/6d67582c-e13c-49b0-ba38-842183da7019-kube-api-access-lb2kx\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220153 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-log-socket\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220171 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220236 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-config\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220264 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-script-lib\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220306 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-netns\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.220394 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d67582c-e13c-49b0-ba38-842183da7019-ovn-node-metrics-cert\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220414 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-systemd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220464 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-var-lib-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.220492 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-etc-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.223843 4718 scope.go:117] "RemoveContainer" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.242234 4718 scope.go:117] "RemoveContainer" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.269912 4718 scope.go:117] "RemoveContainer" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.290585 4718 scope.go:117] "RemoveContainer" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-env-overrides\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-slash\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321745 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-bin\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321769 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2kx\" (UniqueName: \"kubernetes.io/projected/6d67582c-e13c-49b0-ba38-842183da7019-kube-api-access-lb2kx\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321798 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-log-socket\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321840 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-config\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321872 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-script-lib\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-netns\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321923 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-slash\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321976 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322015 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-bin\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322031 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-log-socket\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.321951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d67582c-e13c-49b0-ba38-842183da7019-ovn-node-metrics-cert\") pod 
\"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-systemd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322225 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-var-lib-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322269 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-etc-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-netd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-node-log\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-kubelet\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322490 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-ovn\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322577 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-cni-netd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-systemd\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.322650 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-systemd-units\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-var-lib-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-config\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-env-overrides\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322802 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-node-log\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-etc-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-run-netns\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-ovn\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322836 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d67582c-e13c-49b0-ba38-842183da7019-ovnkube-script-lib\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322862 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-host-kubelet\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-run-openvswitch\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.322884 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d67582c-e13c-49b0-ba38-842183da7019-systemd-units\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.323821 4718 scope.go:117] "RemoveContainer" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.324506 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": container with ID starting with 6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462 not found: ID does not exist" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.324542 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462"} err="failed to get container status \"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": rpc error: code = NotFound desc = could not find container 
\"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": container with ID starting with 6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.324572 4718 scope.go:117] "RemoveContainer" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.324936 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": container with ID starting with 3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9 not found: ID does not exist" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.324957 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9"} err="failed to get container status \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": rpc error: code = NotFound desc = could not find container \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": container with ID starting with 3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.324970 4718 scope.go:117] "RemoveContainer" containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.325203 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": container with ID starting with ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da not found: ID does not exist" 
containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325224 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da"} err="failed to get container status \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": rpc error: code = NotFound desc = could not find container \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": container with ID starting with ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325243 4718 scope.go:117] "RemoveContainer" containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.325491 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": container with ID starting with 6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156 not found: ID does not exist" containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325508 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156"} err="failed to get container status \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": rpc error: code = NotFound desc = could not find container \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": container with ID starting with 6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325522 4718 scope.go:117] 
"RemoveContainer" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.325753 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": container with ID starting with d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19 not found: ID does not exist" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325770 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19"} err="failed to get container status \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": rpc error: code = NotFound desc = could not find container \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": container with ID starting with d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.325784 4718 scope.go:117] "RemoveContainer" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.326372 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": container with ID starting with d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521 not found: ID does not exist" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.326410 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521"} err="failed to get container status \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": rpc error: code = NotFound desc = could not find container \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": container with ID starting with d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.326437 4718 scope.go:117] "RemoveContainer" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.326826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d67582c-e13c-49b0-ba38-842183da7019-ovn-node-metrics-cert\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.329770 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": container with ID starting with 86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6 not found: ID does not exist" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.329823 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6"} err="failed to get container status \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": rpc error: code = NotFound desc = could not find container \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": container with ID starting with 
86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.329858 4718 scope.go:117] "RemoveContainer" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.335037 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": container with ID starting with ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264 not found: ID does not exist" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.335079 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264"} err="failed to get container status \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": rpc error: code = NotFound desc = could not find container \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": container with ID starting with ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.335100 4718 scope.go:117] "RemoveContainer" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" Jan 23 16:26:24 crc kubenswrapper[4718]: E0123 16:26:24.336218 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": container with ID starting with 081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48 not found: ID does not exist" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" Jan 23 16:26:24 crc 
kubenswrapper[4718]: I0123 16:26:24.336287 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48"} err="failed to get container status \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": rpc error: code = NotFound desc = could not find container \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": container with ID starting with 081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.336343 4718 scope.go:117] "RemoveContainer" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.336744 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462"} err="failed to get container status \"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": rpc error: code = NotFound desc = could not find container \"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": container with ID starting with 6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.336798 4718 scope.go:117] "RemoveContainer" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337117 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9"} err="failed to get container status \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": rpc error: code = NotFound desc = could not find container \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": container 
with ID starting with 3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337147 4718 scope.go:117] "RemoveContainer" containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337453 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da"} err="failed to get container status \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": rpc error: code = NotFound desc = could not find container \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": container with ID starting with ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337476 4718 scope.go:117] "RemoveContainer" containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337787 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156"} err="failed to get container status \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": rpc error: code = NotFound desc = could not find container \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": container with ID starting with 6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.337816 4718 scope.go:117] "RemoveContainer" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338113 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19"} err="failed to get container status \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": rpc error: code = NotFound desc = could not find container \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": container with ID starting with d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338142 4718 scope.go:117] "RemoveContainer" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338379 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521"} err="failed to get container status \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": rpc error: code = NotFound desc = could not find container \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": container with ID starting with d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338405 4718 scope.go:117] "RemoveContainer" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338663 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6"} err="failed to get container status \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": rpc error: code = NotFound desc = could not find container \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": container with ID starting with 86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6 not found: ID does not 
exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.338682 4718 scope.go:117] "RemoveContainer" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.339503 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264"} err="failed to get container status \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": rpc error: code = NotFound desc = could not find container \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": container with ID starting with ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.339553 4718 scope.go:117] "RemoveContainer" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.339868 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48"} err="failed to get container status \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": rpc error: code = NotFound desc = could not find container \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": container with ID starting with 081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.339907 4718 scope.go:117] "RemoveContainer" containerID="6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340111 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462"} err="failed to get container status 
\"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": rpc error: code = NotFound desc = could not find container \"6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462\": container with ID starting with 6f9f655fa3b1f57c67c5e3321f83be7655c11f4eb4998435386d4f2b02290462 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340130 4718 scope.go:117] "RemoveContainer" containerID="3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340345 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9"} err="failed to get container status \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": rpc error: code = NotFound desc = could not find container \"3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9\": container with ID starting with 3fe27bebf23b364242050e1671a29963ab53a843d340d6ca2de55b1069d736b9 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340363 4718 scope.go:117] "RemoveContainer" containerID="ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340530 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da"} err="failed to get container status \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": rpc error: code = NotFound desc = could not find container \"ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da\": container with ID starting with ef1bdf45d02f641cb815f803396b713f6941bd85e1648b9a6f807b9f27d280da not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340549 4718 scope.go:117] "RemoveContainer" 
containerID="6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340725 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156"} err="failed to get container status \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": rpc error: code = NotFound desc = could not find container \"6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156\": container with ID starting with 6fe27064646d0d4cf5c4ae96e2e32a5c4d8b6d0cf6a1c8eef5600b77058c9156 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340758 4718 scope.go:117] "RemoveContainer" containerID="d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340960 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19"} err="failed to get container status \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": rpc error: code = NotFound desc = could not find container \"d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19\": container with ID starting with d51dd59a2df3e258583412659a8d0cf7605df65aa8823fd59e79814092e40a19 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.340977 4718 scope.go:117] "RemoveContainer" containerID="d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.342210 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521"} err="failed to get container status \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": rpc error: code = NotFound desc = could 
not find container \"d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521\": container with ID starting with d15aeea47d470c7f727cbd2f343c923e7ab0ccbea23a8b5209b580fa625d8521 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.342239 4718 scope.go:117] "RemoveContainer" containerID="86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.342923 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6"} err="failed to get container status \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": rpc error: code = NotFound desc = could not find container \"86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6\": container with ID starting with 86f0d82060e83e18a85cfd526a8f9e3f52d5342e065c9de306ea1f98f10bf8f6 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.342954 4718 scope.go:117] "RemoveContainer" containerID="ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.343384 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264"} err="failed to get container status \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": rpc error: code = NotFound desc = could not find container \"ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264\": container with ID starting with ff04f0b133bc1eec3f2278375ffcef0302128a87367c53f46ef5aaed6ed07264 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.343412 4718 scope.go:117] "RemoveContainer" containerID="081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 
16:26:24.343745 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48"} err="failed to get container status \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": rpc error: code = NotFound desc = could not find container \"081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48\": container with ID starting with 081f985c00b0f08f7c1ae561e06439911e4482b4398f65ecf58f4a4a445d0a48 not found: ID does not exist" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.361449 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2kx\" (UniqueName: \"kubernetes.io/projected/6d67582c-e13c-49b0-ba38-842183da7019-kube-api-access-lb2kx\") pod \"ovnkube-node-cs66r\" (UID: \"6d67582c-e13c-49b0-ba38-842183da7019\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:24 crc kubenswrapper[4718]: I0123 16:26:24.491563 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:25 crc kubenswrapper[4718]: I0123 16:26:25.085464 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d67582c-e13c-49b0-ba38-842183da7019" containerID="7490309c4295f736c74e001a6c72f803e5524a596a2a655099c436c8b7286c0b" exitCode=0 Jan 23 16:26:25 crc kubenswrapper[4718]: I0123 16:26:25.085617 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerDied","Data":"7490309c4295f736c74e001a6c72f803e5524a596a2a655099c436c8b7286c0b"} Jan 23 16:26:25 crc kubenswrapper[4718]: I0123 16:26:25.086055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"735b1a4aac18537ea84252b93e7b813d0012fc87c68e2080deadbdc7b7851735"} Jan 23 16:26:25 crc kubenswrapper[4718]: I0123 16:26:25.152064 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4985ab62-43a5-4fd8-919c-f9db2eea18f7" path="/var/lib/kubelet/pods/4985ab62-43a5-4fd8-919c-f9db2eea18f7/volumes" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.109751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"ab44e2d6a7f4444441cb16f969f717c4191d4e8e459871f54011d83983b3fcbc"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.110140 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"3354585c9dd6b07e50a3f2d350fbae463db389ceae3107e812ac5e7b7ae4fb3e"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.110152 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"e9ecc57df04d04e98c89d4a22c56d8a6ec19bfe9dbc533c2be0562847249f9b6"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.110164 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"82052dfeba449e54cb3a9fc8721a768d7cc59a4877bcf5a5bb85463c6b2edca6"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.110178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"215b2c053cfbb250624af5d4672024fa9a5727f565cdf5c71ef7c71bae57259d"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.110187 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"5b15e891bbe918c56f915102479cf8049899e46b0ac5b9f28b15d5cd184f72bb"} Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.124767 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h"] Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.126310 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.130591 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.130705 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-njh2t" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.131689 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.152253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkvn\" (UniqueName: \"kubernetes.io/projected/cbd98c6e-aa7f-4040-86a5-f3ef246c3a52-kube-api-access-ndkvn\") pod \"obo-prometheus-operator-68bc856cb9-zp26h\" (UID: \"cbd98c6e-aa7f-4040-86a5-f3ef246c3a52\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.246029 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8"] Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.247248 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.249172 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.249689 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-6tmpd" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.253485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkvn\" (UniqueName: \"kubernetes.io/projected/cbd98c6e-aa7f-4040-86a5-f3ef246c3a52-kube-api-access-ndkvn\") pod \"obo-prometheus-operator-68bc856cb9-zp26h\" (UID: \"cbd98c6e-aa7f-4040-86a5-f3ef246c3a52\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.263102 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4"] Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.264204 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.283818 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkvn\" (UniqueName: \"kubernetes.io/projected/cbd98c6e-aa7f-4040-86a5-f3ef246c3a52-kube-api-access-ndkvn\") pod \"obo-prometheus-operator-68bc856cb9-zp26h\" (UID: \"cbd98c6e-aa7f-4040-86a5-f3ef246c3a52\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.355618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.355706 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: \"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.355807 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: 
I0123 16:26:26.355852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: \"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.449196 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5qrhk"] Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.450064 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.450441 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.453128 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.457012 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.457055 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ns89m" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.457144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: \"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.457223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.457282 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: \"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.462749 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.463387 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: 
\"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.464460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c539786a-23c4-4f13-a3d7-d2166df63aed-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4\" (UID: \"c539786a-23c4-4f13-a3d7-d2166df63aed\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.464929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38c76550-362b-4f9e-b1fa-58de8a6356a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8\" (UID: \"38c76550-362b-4f9e-b1fa-58de8a6356a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.485694 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(439a35f4bab775fe0a7c7cf60826d7e5a21a4ceb826ecdbc307b4a84feb01904): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.485793 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(439a35f4bab775fe0a7c7cf60826d7e5a21a4ceb826ecdbc307b4a84feb01904): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.485821 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(439a35f4bab775fe0a7c7cf60826d7e5a21a4ceb826ecdbc307b4a84feb01904): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.485880 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(439a35f4bab775fe0a7c7cf60826d7e5a21a4ceb826ecdbc307b4a84feb01904): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" podUID="cbd98c6e-aa7f-4040-86a5-f3ef246c3a52" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.558775 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.558860 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cb5\" (UniqueName: \"kubernetes.io/projected/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-kube-api-access-79cb5\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.564598 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.582253 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.603568 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(68ce6de97738487e0cdf7c9d717579455aa586fc3c80b16f89ff1734c910e73c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.603649 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(68ce6de97738487e0cdf7c9d717579455aa586fc3c80b16f89ff1734c910e73c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.603677 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(68ce6de97738487e0cdf7c9d717579455aa586fc3c80b16f89ff1734c910e73c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.603737 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(68ce6de97738487e0cdf7c9d717579455aa586fc3c80b16f89ff1734c910e73c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" podUID="38c76550-362b-4f9e-b1fa-58de8a6356a9" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.618336 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(52badc673fae8d4706038cf3a17e77dc99585ff9407f44ac39d0750817039216): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.618397 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(52badc673fae8d4706038cf3a17e77dc99585ff9407f44ac39d0750817039216): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.618421 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(52badc673fae8d4706038cf3a17e77dc99585ff9407f44ac39d0750817039216): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.618481 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(52badc673fae8d4706038cf3a17e77dc99585ff9407f44ac39d0750817039216): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" podUID="c539786a-23c4-4f13-a3d7-d2166df63aed" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.659370 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-b9lkr"] Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.660340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.660412 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.660451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79cb5\" (UniqueName: \"kubernetes.io/projected/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-kube-api-access-79cb5\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.667261 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-9j265" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.667699 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.696318 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cb5\" (UniqueName: \"kubernetes.io/projected/d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0-kube-api-access-79cb5\") pod \"observability-operator-59bdc8b94-5qrhk\" (UID: \"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0\") " pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.761733 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a751218-1b91-4c7f-be34-ea4036ca440f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" 
Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.761838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbr8\" (UniqueName: \"kubernetes.io/projected/8a751218-1b91-4c7f-be34-ea4036ca440f-kube-api-access-dlbr8\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.821358 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.857906 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(88152edaab12972938248cca2f97f04fe1251fb6fba1798c464ee74d01d5adb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.858011 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(88152edaab12972938248cca2f97f04fe1251fb6fba1798c464ee74d01d5adb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.858037 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(88152edaab12972938248cca2f97f04fe1251fb6fba1798c464ee74d01d5adb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:26 crc kubenswrapper[4718]: E0123 16:26:26.858086 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(88152edaab12972938248cca2f97f04fe1251fb6fba1798c464ee74d01d5adb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" podUID="d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.862772 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a751218-1b91-4c7f-be34-ea4036ca440f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.862874 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbr8\" (UniqueName: \"kubernetes.io/projected/8a751218-1b91-4c7f-be34-ea4036ca440f-kube-api-access-dlbr8\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.863784 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a751218-1b91-4c7f-be34-ea4036ca440f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.887343 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbr8\" (UniqueName: \"kubernetes.io/projected/8a751218-1b91-4c7f-be34-ea4036ca440f-kube-api-access-dlbr8\") pod \"perses-operator-5bf474d74f-b9lkr\" (UID: \"8a751218-1b91-4c7f-be34-ea4036ca440f\") " pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:26 crc kubenswrapper[4718]: I0123 16:26:26.978963 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:27 crc kubenswrapper[4718]: E0123 16:26:27.010829 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(321f2a5aa149d41b5bd54958a0bd97f09c429d722e546c6fde1b7156c1e042a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:27 crc kubenswrapper[4718]: E0123 16:26:27.011025 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(321f2a5aa149d41b5bd54958a0bd97f09c429d722e546c6fde1b7156c1e042a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:27 crc kubenswrapper[4718]: E0123 16:26:27.011135 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(321f2a5aa149d41b5bd54958a0bd97f09c429d722e546c6fde1b7156c1e042a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:27 crc kubenswrapper[4718]: E0123 16:26:27.011268 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(321f2a5aa149d41b5bd54958a0bd97f09c429d722e546c6fde1b7156c1e042a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" podUID="8a751218-1b91-4c7f-be34-ea4036ca440f" Jan 23 16:26:28 crc kubenswrapper[4718]: I0123 16:26:28.876285 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:26:28 crc kubenswrapper[4718]: I0123 16:26:28.877810 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:26:29 crc kubenswrapper[4718]: I0123 16:26:29.134492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"d22c64aff403aadbc214453de643da4468a6efccd48e7de82c27bbb13b7c0ec4"} Jan 23 16:26:31 crc 
kubenswrapper[4718]: I0123 16:26:31.153832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" event={"ID":"6d67582c-e13c-49b0-ba38-842183da7019","Type":"ContainerStarted","Data":"2f2e515e510776b3b0ec3cacab6d8972bd7a63292a2abbc371c4fe7eba88a498"} Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.156473 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.156526 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.156818 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.193214 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.193754 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.197258 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" podStartSLOduration=7.197233746 podStartE2EDuration="7.197233746s" podCreationTimestamp="2026-01-23 16:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:26:31.192043232 +0000 UTC m=+592.339285223" watchObservedRunningTime="2026-01-23 16:26:31.197233746 +0000 UTC m=+592.344475737" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.648909 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-b9lkr"] Jan 23 16:26:31 crc 
kubenswrapper[4718]: I0123 16:26:31.649074 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.649728 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.657352 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8"] Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.657510 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.658126 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.667157 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h"] Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.667309 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.667903 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.669462 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5qrhk"] Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.669620 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.670254 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.683394 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4"] Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.683561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:31 crc kubenswrapper[4718]: I0123 16:26:31.684432 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.732371 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(9ac789af1f4424db043a25a5a4c035acd13177d1349df8e6455dc920abee640a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.732460 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(9ac789af1f4424db043a25a5a4c035acd13177d1349df8e6455dc920abee640a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.732487 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(9ac789af1f4424db043a25a5a4c035acd13177d1349df8e6455dc920abee640a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.732549 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(9ac789af1f4424db043a25a5a4c035acd13177d1349df8e6455dc920abee640a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" podUID="cbd98c6e-aa7f-4040-86a5-f3ef246c3a52" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.738902 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(1c7dcb093c47a372445930c63cf00b43d40b5b82aca355f62e87c4eb7d69459b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.738961 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(1c7dcb093c47a372445930c63cf00b43d40b5b82aca355f62e87c4eb7d69459b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.738987 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(1c7dcb093c47a372445930c63cf00b43d40b5b82aca355f62e87c4eb7d69459b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.739024 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(1c7dcb093c47a372445930c63cf00b43d40b5b82aca355f62e87c4eb7d69459b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" podUID="38c76550-362b-4f9e-b1fa-58de8a6356a9" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.744503 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(3c6223cd37c71a80a53afa7d052b7ec807c0bb70c4d64dab4ccf926a0c7b21e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.744560 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(3c6223cd37c71a80a53afa7d052b7ec807c0bb70c4d64dab4ccf926a0c7b21e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.744580 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(3c6223cd37c71a80a53afa7d052b7ec807c0bb70c4d64dab4ccf926a0c7b21e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.744614 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(3c6223cd37c71a80a53afa7d052b7ec807c0bb70c4d64dab4ccf926a0c7b21e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" podUID="8a751218-1b91-4c7f-be34-ea4036ca440f" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.761305 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(245c095d19a2a08e159fdfb060a7f8cbb03e7c23c356b1b1dcc47a0e8ab6c721): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.761379 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(245c095d19a2a08e159fdfb060a7f8cbb03e7c23c356b1b1dcc47a0e8ab6c721): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.761405 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(245c095d19a2a08e159fdfb060a7f8cbb03e7c23c356b1b1dcc47a0e8ab6c721): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.761449 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(245c095d19a2a08e159fdfb060a7f8cbb03e7c23c356b1b1dcc47a0e8ab6c721): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" podUID="d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.768166 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(5579a121498578ba78455077c13191a11ef2ffeb3f61d0bf7477694b79b55af0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.768224 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(5579a121498578ba78455077c13191a11ef2ffeb3f61d0bf7477694b79b55af0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.768247 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(5579a121498578ba78455077c13191a11ef2ffeb3f61d0bf7477694b79b55af0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:31 crc kubenswrapper[4718]: E0123 16:26:31.768287 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(5579a121498578ba78455077c13191a11ef2ffeb3f61d0bf7477694b79b55af0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" podUID="c539786a-23c4-4f13-a3d7-d2166df63aed" Jan 23 16:26:34 crc kubenswrapper[4718]: I0123 16:26:34.140667 4718 scope.go:117] "RemoveContainer" containerID="057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed" Jan 23 16:26:34 crc kubenswrapper[4718]: E0123 16:26:34.141291 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tb79v_openshift-multus(d2a07769-1921-4484-b1cd-28b23487bb39)\"" pod="openshift-multus/multus-tb79v" podUID="d2a07769-1921-4484-b1cd-28b23487bb39" Jan 23 16:26:43 crc kubenswrapper[4718]: I0123 16:26:43.140295 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:43 crc kubenswrapper[4718]: I0123 16:26:43.141783 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:43 crc kubenswrapper[4718]: E0123 16:26:43.172960 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(03daa23d349a1fc976a47782c701e05d4204a909937194be97a4e3b27ee2e228): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:43 crc kubenswrapper[4718]: E0123 16:26:43.173461 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(03daa23d349a1fc976a47782c701e05d4204a909937194be97a4e3b27ee2e228): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:43 crc kubenswrapper[4718]: E0123 16:26:43.173490 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(03daa23d349a1fc976a47782c701e05d4204a909937194be97a4e3b27ee2e228): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:43 crc kubenswrapper[4718]: E0123 16:26:43.173551 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators(38c76550-362b-4f9e-b1fa-58de8a6356a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_openshift-operators_38c76550-362b-4f9e-b1fa-58de8a6356a9_0(03daa23d349a1fc976a47782c701e05d4204a909937194be97a4e3b27ee2e228): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" podUID="38c76550-362b-4f9e-b1fa-58de8a6356a9" Jan 23 16:26:45 crc kubenswrapper[4718]: I0123 16:26:45.140004 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:45 crc kubenswrapper[4718]: I0123 16:26:45.140033 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:45 crc kubenswrapper[4718]: I0123 16:26:45.141519 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:45 crc kubenswrapper[4718]: I0123 16:26:45.141795 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.184751 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(1adbb22c181239af858af68bc542ee87eb7ee77278b1db55bf3a8f1952bf9b0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.185285 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(1adbb22c181239af858af68bc542ee87eb7ee77278b1db55bf3a8f1952bf9b0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.185311 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(1adbb22c181239af858af68bc542ee87eb7ee77278b1db55bf3a8f1952bf9b0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.185372 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators(cbd98c6e-aa7f-4040-86a5-f3ef246c3a52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zp26h_openshift-operators_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52_0(1adbb22c181239af858af68bc542ee87eb7ee77278b1db55bf3a8f1952bf9b0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" podUID="cbd98c6e-aa7f-4040-86a5-f3ef246c3a52" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.198689 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(ecdb1aa4ae57d750013466c3206e65e405aa7f41efb0443beeed16b7853e08dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.198785 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(ecdb1aa4ae57d750013466c3206e65e405aa7f41efb0443beeed16b7853e08dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.198815 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(ecdb1aa4ae57d750013466c3206e65e405aa7f41efb0443beeed16b7853e08dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:45 crc kubenswrapper[4718]: E0123 16:26:45.198886 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators(c539786a-23c4-4f13-a3d7-d2166df63aed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_openshift-operators_c539786a-23c4-4f13-a3d7-d2166df63aed_0(ecdb1aa4ae57d750013466c3206e65e405aa7f41efb0443beeed16b7853e08dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" podUID="c539786a-23c4-4f13-a3d7-d2166df63aed" Jan 23 16:26:46 crc kubenswrapper[4718]: I0123 16:26:46.139368 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:46 crc kubenswrapper[4718]: I0123 16:26:46.140259 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:46 crc kubenswrapper[4718]: I0123 16:26:46.141875 4718 scope.go:117] "RemoveContainer" containerID="057f3efdeba4092338077df8e639b0ee0cb35cf8330d07c93524611cb0317bed" Jan 23 16:26:46 crc kubenswrapper[4718]: I0123 16:26:46.142254 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:46 crc kubenswrapper[4718]: I0123 16:26:46.142533 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.202309 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(9a6dc3d954b6af46cb731f4c72b97ecdbff034205d36dfd8901ec8e4eec72a76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.202396 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(9a6dc3d954b6af46cb731f4c72b97ecdbff034205d36dfd8901ec8e4eec72a76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.202417 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(9a6dc3d954b6af46cb731f4c72b97ecdbff034205d36dfd8901ec8e4eec72a76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.202471 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5qrhk_openshift-operators(d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5qrhk_openshift-operators_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0_0(9a6dc3d954b6af46cb731f4c72b97ecdbff034205d36dfd8901ec8e4eec72a76): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" podUID="d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.213815 4718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(0f42ebb2238c63e2e98f512d45a73e4c66cbec4bcf1a2c56bb3e91c47ab90939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.213907 4718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(0f42ebb2238c63e2e98f512d45a73e4c66cbec4bcf1a2c56bb3e91c47ab90939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.213933 4718 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(0f42ebb2238c63e2e98f512d45a73e4c66cbec4bcf1a2c56bb3e91c47ab90939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:46 crc kubenswrapper[4718]: E0123 16:26:46.213993 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b9lkr_openshift-operators(8a751218-1b91-4c7f-be34-ea4036ca440f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b9lkr_openshift-operators_8a751218-1b91-4c7f-be34-ea4036ca440f_0(0f42ebb2238c63e2e98f512d45a73e4c66cbec4bcf1a2c56bb3e91c47ab90939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" podUID="8a751218-1b91-4c7f-be34-ea4036ca440f" Jan 23 16:26:47 crc kubenswrapper[4718]: I0123 16:26:47.268089 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tb79v_d2a07769-1921-4484-b1cd-28b23487bb39/kube-multus/2.log" Jan 23 16:26:47 crc kubenswrapper[4718]: I0123 16:26:47.268473 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tb79v" event={"ID":"d2a07769-1921-4484-b1cd-28b23487bb39","Type":"ContainerStarted","Data":"04ed3ef9878a73f2206ab47c3bee845d02f2c01eab7c5c6a7311f7f4c750b768"} Jan 23 16:26:54 crc kubenswrapper[4718]: I0123 16:26:54.519774 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs66r" Jan 23 16:26:55 crc kubenswrapper[4718]: I0123 16:26:55.140081 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:55 crc kubenswrapper[4718]: I0123 16:26:55.140948 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" Jan 23 16:26:55 crc kubenswrapper[4718]: I0123 16:26:55.414238 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8"] Jan 23 16:26:56 crc kubenswrapper[4718]: I0123 16:26:56.338607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" event={"ID":"38c76550-362b-4f9e-b1fa-58de8a6356a9","Type":"ContainerStarted","Data":"7d7bf3488b3340da4a0aaa9f854d56db332ad73a8eae5f369ca3b563a2cc2316"} Jan 23 16:26:57 crc kubenswrapper[4718]: I0123 16:26:57.144090 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:57 crc kubenswrapper[4718]: I0123 16:26:57.144305 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:26:57 crc kubenswrapper[4718]: I0123 16:26:57.423307 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-b9lkr"] Jan 23 16:26:57 crc kubenswrapper[4718]: W0123 16:26:57.433368 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a751218_1b91_4c7f_be34_ea4036ca440f.slice/crio-ee793548a72f28bc4be742658ff1b140787e543a0e10719c3e3bfaff6569d231 WatchSource:0}: Error finding container ee793548a72f28bc4be742658ff1b140787e543a0e10719c3e3bfaff6569d231: Status 404 returned error can't find the container with id ee793548a72f28bc4be742658ff1b140787e543a0e10719c3e3bfaff6569d231 Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.139829 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.140313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.357651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" event={"ID":"8a751218-1b91-4c7f-be34-ea4036ca440f","Type":"ContainerStarted","Data":"ee793548a72f28bc4be742658ff1b140787e543a0e10719c3e3bfaff6569d231"} Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.360042 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4"] Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.875712 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:26:58 crc kubenswrapper[4718]: I0123 16:26:58.875841 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:26:59 crc kubenswrapper[4718]: W0123 16:26:59.553710 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc539786a_23c4_4f13_a3d7_d2166df63aed.slice/crio-b3a29c4b3b67a5edaabd0d1736b0d574856aeeb9241dd31231d7587978c23f19 WatchSource:0}: Error finding container 
b3a29c4b3b67a5edaabd0d1736b0d574856aeeb9241dd31231d7587978c23f19: Status 404 returned error can't find the container with id b3a29c4b3b67a5edaabd0d1736b0d574856aeeb9241dd31231d7587978c23f19 Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.140405 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.140431 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.141758 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.141941 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.375919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" event={"ID":"c539786a-23c4-4f13-a3d7-d2166df63aed","Type":"ContainerStarted","Data":"b3a29c4b3b67a5edaabd0d1736b0d574856aeeb9241dd31231d7587978c23f19"} Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.377382 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" event={"ID":"38c76550-362b-4f9e-b1fa-58de8a6356a9","Type":"ContainerStarted","Data":"e458483a252684ce6601ca2af5b344a56db83ca5ccacd9332406a6f48b21a547"} Jan 23 16:27:00 crc kubenswrapper[4718]: I0123 16:27:00.394351 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8" podStartSLOduration=29.868450237 
podStartE2EDuration="34.394327035s" podCreationTimestamp="2026-01-23 16:26:26 +0000 UTC" firstStartedPulling="2026-01-23 16:26:55.429880649 +0000 UTC m=+616.577122650" lastFinishedPulling="2026-01-23 16:26:59.955757457 +0000 UTC m=+621.102999448" observedRunningTime="2026-01-23 16:27:00.392478224 +0000 UTC m=+621.539720215" watchObservedRunningTime="2026-01-23 16:27:00.394327035 +0000 UTC m=+621.541569036" Jan 23 16:27:01 crc kubenswrapper[4718]: I0123 16:27:01.386859 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" event={"ID":"8a751218-1b91-4c7f-be34-ea4036ca440f","Type":"ContainerStarted","Data":"e25654a341444dddef7b80caf39c607c36b29b5e805e66b8a5c659bf9a1d70a0"} Jan 23 16:27:01 crc kubenswrapper[4718]: I0123 16:27:01.387358 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:27:01 crc kubenswrapper[4718]: I0123 16:27:01.410762 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" podStartSLOduration=31.644014361 podStartE2EDuration="35.410735021s" podCreationTimestamp="2026-01-23 16:26:26 +0000 UTC" firstStartedPulling="2026-01-23 16:26:57.436195236 +0000 UTC m=+618.583437237" lastFinishedPulling="2026-01-23 16:27:01.202915906 +0000 UTC m=+622.350157897" observedRunningTime="2026-01-23 16:27:01.403623284 +0000 UTC m=+622.550865275" watchObservedRunningTime="2026-01-23 16:27:01.410735021 +0000 UTC m=+622.557977022" Jan 23 16:27:01 crc kubenswrapper[4718]: I0123 16:27:01.599796 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h"] Jan 23 16:27:01 crc kubenswrapper[4718]: W0123 16:27:01.610289 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd98c6e_aa7f_4040_86a5_f3ef246c3a52.slice/crio-e5e9adf4f64df3831e27d67fe4d961ff76d0433aa2a1d45e5ab4bec4f91d7ccf WatchSource:0}: Error finding container e5e9adf4f64df3831e27d67fe4d961ff76d0433aa2a1d45e5ab4bec4f91d7ccf: Status 404 returned error can't find the container with id e5e9adf4f64df3831e27d67fe4d961ff76d0433aa2a1d45e5ab4bec4f91d7ccf Jan 23 16:27:01 crc kubenswrapper[4718]: I0123 16:27:01.723144 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5qrhk"] Jan 23 16:27:01 crc kubenswrapper[4718]: W0123 16:27:01.725214 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8c12e8f_cd36_44e0_9a71_d3b3e5bff8d0.slice/crio-68e505126238fd5dda7fb7b572a253b03c9b60b38862f018eb82efcf33c6c20f WatchSource:0}: Error finding container 68e505126238fd5dda7fb7b572a253b03c9b60b38862f018eb82efcf33c6c20f: Status 404 returned error can't find the container with id 68e505126238fd5dda7fb7b572a253b03c9b60b38862f018eb82efcf33c6c20f Jan 23 16:27:02 crc kubenswrapper[4718]: I0123 16:27:02.392210 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" event={"ID":"c539786a-23c4-4f13-a3d7-d2166df63aed","Type":"ContainerStarted","Data":"61307c5ad5874c5e8a22c755aa057f5dd3f7a65f910ed76bc09a43199c3d8c60"} Jan 23 16:27:02 crc kubenswrapper[4718]: I0123 16:27:02.393439 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" event={"ID":"cbd98c6e-aa7f-4040-86a5-f3ef246c3a52","Type":"ContainerStarted","Data":"e5e9adf4f64df3831e27d67fe4d961ff76d0433aa2a1d45e5ab4bec4f91d7ccf"} Jan 23 16:27:02 crc kubenswrapper[4718]: I0123 16:27:02.394748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" event={"ID":"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0","Type":"ContainerStarted","Data":"68e505126238fd5dda7fb7b572a253b03c9b60b38862f018eb82efcf33c6c20f"} Jan 23 16:27:02 crc kubenswrapper[4718]: I0123 16:27:02.417470 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4" podStartSLOduration=34.778583062 podStartE2EDuration="36.417445878s" podCreationTimestamp="2026-01-23 16:26:26 +0000 UTC" firstStartedPulling="2026-01-23 16:26:59.558230899 +0000 UTC m=+620.705472890" lastFinishedPulling="2026-01-23 16:27:01.197093715 +0000 UTC m=+622.344335706" observedRunningTime="2026-01-23 16:27:02.413856379 +0000 UTC m=+623.561098380" watchObservedRunningTime="2026-01-23 16:27:02.417445878 +0000 UTC m=+623.564687859" Jan 23 16:27:05 crc kubenswrapper[4718]: I0123 16:27:05.425217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" event={"ID":"cbd98c6e-aa7f-4040-86a5-f3ef246c3a52","Type":"ContainerStarted","Data":"4883df38ab17b6cad6e905b949100d1c7696f01170a1c769e38344923fe4dcf3"} Jan 23 16:27:05 crc kubenswrapper[4718]: I0123 16:27:05.473043 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zp26h" podStartSLOduration=36.70681949 podStartE2EDuration="39.473010645s" podCreationTimestamp="2026-01-23 16:26:26 +0000 UTC" firstStartedPulling="2026-01-23 16:27:01.635121017 +0000 UTC m=+622.782363018" lastFinishedPulling="2026-01-23 16:27:04.401312182 +0000 UTC m=+625.548554173" observedRunningTime="2026-01-23 16:27:05.46492858 +0000 UTC m=+626.612170581" watchObservedRunningTime="2026-01-23 16:27:05.473010645 +0000 UTC m=+626.620252636" Jan 23 16:27:06 crc kubenswrapper[4718]: I0123 16:27:06.985468 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5bf474d74f-b9lkr" Jan 23 16:27:08 crc kubenswrapper[4718]: I0123 16:27:08.464907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" event={"ID":"d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0","Type":"ContainerStarted","Data":"073d06a875f16b486c87eb0cc8cb7aaf8d92ec91174891d3524ed4b42de3baf2"} Jan 23 16:27:08 crc kubenswrapper[4718]: I0123 16:27:08.465356 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:27:08 crc kubenswrapper[4718]: I0123 16:27:08.479539 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" Jan 23 16:27:08 crc kubenswrapper[4718]: I0123 16:27:08.549570 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-5qrhk" podStartSLOduration=36.665463441 podStartE2EDuration="42.549544384s" podCreationTimestamp="2026-01-23 16:26:26 +0000 UTC" firstStartedPulling="2026-01-23 16:27:01.729784038 +0000 UTC m=+622.877026029" lastFinishedPulling="2026-01-23 16:27:07.613864941 +0000 UTC m=+628.761106972" observedRunningTime="2026-01-23 16:27:08.51913446 +0000 UTC m=+629.666376451" watchObservedRunningTime="2026-01-23 16:27:08.549544384 +0000 UTC m=+629.696786375" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.634342 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sntwx"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.635761 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.638504 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.639031 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vj7rc" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.639064 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.657463 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8jlw4"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.658597 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.662235 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jnxnb" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.671287 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sntwx"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.685811 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8jlw4"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.690518 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-br6fl"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.691943 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.694178 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t5984" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.704729 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-br6fl"] Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.781832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbflp\" (UniqueName: \"kubernetes.io/projected/1e5ee60b-7363-4a74-b69d-1f4f474166e0-kube-api-access-rbflp\") pod \"cert-manager-858654f9db-8jlw4\" (UID: \"1e5ee60b-7363-4a74-b69d-1f4f474166e0\") " pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.781913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764ht\" (UniqueName: \"kubernetes.io/projected/f99e5457-16fb-453f-909c-a8364ffc0372-kube-api-access-764ht\") pod \"cert-manager-webhook-687f57d79b-br6fl\" (UID: \"f99e5457-16fb-453f-909c-a8364ffc0372\") " pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.781975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68z6\" (UniqueName: \"kubernetes.io/projected/57379fa4-b935-4095-a6c1-9e83709c5906-kube-api-access-b68z6\") pod \"cert-manager-cainjector-cf98fcc89-sntwx\" (UID: \"57379fa4-b935-4095-a6c1-9e83709c5906\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.883341 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b68z6\" (UniqueName: 
\"kubernetes.io/projected/57379fa4-b935-4095-a6c1-9e83709c5906-kube-api-access-b68z6\") pod \"cert-manager-cainjector-cf98fcc89-sntwx\" (UID: \"57379fa4-b935-4095-a6c1-9e83709c5906\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.883510 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbflp\" (UniqueName: \"kubernetes.io/projected/1e5ee60b-7363-4a74-b69d-1f4f474166e0-kube-api-access-rbflp\") pod \"cert-manager-858654f9db-8jlw4\" (UID: \"1e5ee60b-7363-4a74-b69d-1f4f474166e0\") " pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.883541 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764ht\" (UniqueName: \"kubernetes.io/projected/f99e5457-16fb-453f-909c-a8364ffc0372-kube-api-access-764ht\") pod \"cert-manager-webhook-687f57d79b-br6fl\" (UID: \"f99e5457-16fb-453f-909c-a8364ffc0372\") " pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.902594 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b68z6\" (UniqueName: \"kubernetes.io/projected/57379fa4-b935-4095-a6c1-9e83709c5906-kube-api-access-b68z6\") pod \"cert-manager-cainjector-cf98fcc89-sntwx\" (UID: \"57379fa4-b935-4095-a6c1-9e83709c5906\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.907822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764ht\" (UniqueName: \"kubernetes.io/projected/f99e5457-16fb-453f-909c-a8364ffc0372-kube-api-access-764ht\") pod \"cert-manager-webhook-687f57d79b-br6fl\" (UID: \"f99e5457-16fb-453f-909c-a8364ffc0372\") " pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.908128 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbflp\" (UniqueName: \"kubernetes.io/projected/1e5ee60b-7363-4a74-b69d-1f4f474166e0-kube-api-access-rbflp\") pod \"cert-manager-858654f9db-8jlw4\" (UID: \"1e5ee60b-7363-4a74-b69d-1f4f474166e0\") " pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.958492 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" Jan 23 16:27:15 crc kubenswrapper[4718]: I0123 16:27:15.977341 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.009349 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.322035 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sntwx"] Jan 23 16:27:16 crc kubenswrapper[4718]: W0123 16:27:16.329122 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57379fa4_b935_4095_a6c1_9e83709c5906.slice/crio-92ff646fba12fc03a90f21cd345cee360fee7d1acf9d03eda7ac11592785578a WatchSource:0}: Error finding container 92ff646fba12fc03a90f21cd345cee360fee7d1acf9d03eda7ac11592785578a: Status 404 returned error can't find the container with id 92ff646fba12fc03a90f21cd345cee360fee7d1acf9d03eda7ac11592785578a Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.446210 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8jlw4"] Jan 23 16:27:16 crc kubenswrapper[4718]: W0123 16:27:16.450610 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5ee60b_7363_4a74_b69d_1f4f474166e0.slice/crio-b63e7f0960e2717151f04117f694ab988fc5fa24816e24fa779f381540267743 WatchSource:0}: Error finding container b63e7f0960e2717151f04117f694ab988fc5fa24816e24fa779f381540267743: Status 404 returned error can't find the container with id b63e7f0960e2717151f04117f694ab988fc5fa24816e24fa779f381540267743 Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.499808 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-br6fl"] Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.526121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8jlw4" event={"ID":"1e5ee60b-7363-4a74-b69d-1f4f474166e0","Type":"ContainerStarted","Data":"b63e7f0960e2717151f04117f694ab988fc5fa24816e24fa779f381540267743"} Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.527317 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" event={"ID":"f99e5457-16fb-453f-909c-a8364ffc0372","Type":"ContainerStarted","Data":"cd86b134b5cf859fb04d9b8a46142aef0861743d649c8765480426f6a7c69a95"} Jan 23 16:27:16 crc kubenswrapper[4718]: I0123 16:27:16.528193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" event={"ID":"57379fa4-b935-4095-a6c1-9e83709c5906","Type":"ContainerStarted","Data":"92ff646fba12fc03a90f21cd345cee360fee7d1acf9d03eda7ac11592785578a"} Jan 23 16:27:19 crc kubenswrapper[4718]: I0123 16:27:19.556894 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" event={"ID":"57379fa4-b935-4095-a6c1-9e83709c5906","Type":"ContainerStarted","Data":"964dc3702fd1d7e0864c0731b0297a203fefdf7c765b6a0138c57e189a6f6980"} Jan 23 16:27:19 crc kubenswrapper[4718]: I0123 16:27:19.579571 4718 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" podStartSLOduration=2.365381144 podStartE2EDuration="4.579550128s" podCreationTimestamp="2026-01-23 16:27:15 +0000 UTC" firstStartedPulling="2026-01-23 16:27:16.331856521 +0000 UTC m=+637.479098512" lastFinishedPulling="2026-01-23 16:27:18.546025505 +0000 UTC m=+639.693267496" observedRunningTime="2026-01-23 16:27:19.578102537 +0000 UTC m=+640.725344588" watchObservedRunningTime="2026-01-23 16:27:19.579550128 +0000 UTC m=+640.726792139" Jan 23 16:27:21 crc kubenswrapper[4718]: I0123 16:27:21.573016 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8jlw4" event={"ID":"1e5ee60b-7363-4a74-b69d-1f4f474166e0","Type":"ContainerStarted","Data":"1eaa5b902a85325bd263b644ff73d85d74f5f4c5ee33f782984ce74ae74b2e15"} Jan 23 16:27:21 crc kubenswrapper[4718]: I0123 16:27:21.575865 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" event={"ID":"f99e5457-16fb-453f-909c-a8364ffc0372","Type":"ContainerStarted","Data":"4c026f5fe630ba5eb1e1e10e3b1f72eb988b04c80535d517f66532bd7484314a"} Jan 23 16:27:21 crc kubenswrapper[4718]: I0123 16:27:21.576313 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:21 crc kubenswrapper[4718]: I0123 16:27:21.602564 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8jlw4" podStartSLOduration=2.080901828 podStartE2EDuration="6.602543188s" podCreationTimestamp="2026-01-23 16:27:15 +0000 UTC" firstStartedPulling="2026-01-23 16:27:16.452198606 +0000 UTC m=+637.599440597" lastFinishedPulling="2026-01-23 16:27:20.973839956 +0000 UTC m=+642.121081957" observedRunningTime="2026-01-23 16:27:21.597674663 +0000 UTC m=+642.744916664" watchObservedRunningTime="2026-01-23 16:27:21.602543188 +0000 UTC m=+642.749785179" Jan 23 
16:27:21 crc kubenswrapper[4718]: I0123 16:27:21.628757 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" podStartSLOduration=2.198162926 podStartE2EDuration="6.628734986s" podCreationTimestamp="2026-01-23 16:27:15 +0000 UTC" firstStartedPulling="2026-01-23 16:27:16.509740475 +0000 UTC m=+637.656982466" lastFinishedPulling="2026-01-23 16:27:20.940312525 +0000 UTC m=+642.087554526" observedRunningTime="2026-01-23 16:27:21.623701086 +0000 UTC m=+642.770943087" watchObservedRunningTime="2026-01-23 16:27:21.628734986 +0000 UTC m=+642.775976977" Jan 23 16:27:26 crc kubenswrapper[4718]: I0123 16:27:26.013514 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" Jan 23 16:27:28 crc kubenswrapper[4718]: I0123 16:27:28.875865 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:27:28 crc kubenswrapper[4718]: I0123 16:27:28.876782 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:27:28 crc kubenswrapper[4718]: I0123 16:27:28.876868 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:27:28 crc kubenswrapper[4718]: I0123 16:27:28.877975 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:27:28 crc kubenswrapper[4718]: I0123 16:27:28.878087 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278" gracePeriod=600 Jan 23 16:27:29 crc kubenswrapper[4718]: I0123 16:27:29.645713 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278" exitCode=0 Jan 23 16:27:29 crc kubenswrapper[4718]: I0123 16:27:29.645861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278"} Jan 23 16:27:29 crc kubenswrapper[4718]: I0123 16:27:29.646097 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2"} Jan 23 16:27:29 crc kubenswrapper[4718]: I0123 16:27:29.646129 4718 scope.go:117] "RemoveContainer" containerID="68ba7028c895a9368e5bfd080b533a77a260dd92bd96940d5d645457904a6833" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.687666 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4"] Jan 23 16:27:49 crc 
kubenswrapper[4718]: I0123 16:27:49.691721 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.695768 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4"] Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.696348 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.773840 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ttg\" (UniqueName: \"kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.773889 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.773919 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " 
pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.875496 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ttg\" (UniqueName: \"kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.875547 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.875579 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.876227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.876365 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:49 crc kubenswrapper[4718]: I0123 16:27:49.898065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ttg\" (UniqueName: \"kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.014485 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.056463 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2"] Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.058225 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.078359 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2"] Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.078654 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.078738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647pp\" (UniqueName: \"kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.078770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.183615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647pp\" (UniqueName: 
\"kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.183791 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.183879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.184357 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.185002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: 
\"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.224155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647pp\" (UniqueName: \"kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.464158 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.553894 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4"] Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.733616 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2"] Jan 23 16:27:50 crc kubenswrapper[4718]: W0123 16:27:50.741290 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a945c6_373e_4c80_acb5_1dd5a14c2be6.slice/crio-59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36 WatchSource:0}: Error finding container 59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36: Status 404 returned error can't find the container with id 59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36 Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.816545 4718 generic.go:334] "Generic (PLEG): container finished" podID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" 
containerID="84dc48756aa461baf4d66bbebb1716d7d59aaa6752ea897fa33c56d745cca464" exitCode=0 Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.816652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" event={"ID":"76c75fbc-0e43-4fac-81e9-06bf925c0a1e","Type":"ContainerDied","Data":"84dc48756aa461baf4d66bbebb1716d7d59aaa6752ea897fa33c56d745cca464"} Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.816691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" event={"ID":"76c75fbc-0e43-4fac-81e9-06bf925c0a1e","Type":"ContainerStarted","Data":"e75cfef8b387cae409a51deb57f08ea70c320afbcafc2a71c8608447bc90940c"} Jan 23 16:27:50 crc kubenswrapper[4718]: I0123 16:27:50.820424 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" event={"ID":"78a945c6-373e-4c80-acb5-1dd5a14c2be6","Type":"ContainerStarted","Data":"59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36"} Jan 23 16:27:51 crc kubenswrapper[4718]: I0123 16:27:51.834545 4718 generic.go:334] "Generic (PLEG): container finished" podID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerID="3f13e7534e6e0464970037adfb662c10e3bc4d04926c20f34058d83cc4e1a5b3" exitCode=0 Jan 23 16:27:51 crc kubenswrapper[4718]: I0123 16:27:51.834716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" event={"ID":"78a945c6-373e-4c80-acb5-1dd5a14c2be6","Type":"ContainerDied","Data":"3f13e7534e6e0464970037adfb662c10e3bc4d04926c20f34058d83cc4e1a5b3"} Jan 23 16:27:53 crc kubenswrapper[4718]: I0123 16:27:53.857200 4718 generic.go:334] "Generic (PLEG): container finished" podID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" 
containerID="04484377e5a1e207bfb2261ae035e12806bdfd83baf216142faa9697984c6200" exitCode=0 Jan 23 16:27:53 crc kubenswrapper[4718]: I0123 16:27:53.857287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" event={"ID":"76c75fbc-0e43-4fac-81e9-06bf925c0a1e","Type":"ContainerDied","Data":"04484377e5a1e207bfb2261ae035e12806bdfd83baf216142faa9697984c6200"} Jan 23 16:27:54 crc kubenswrapper[4718]: I0123 16:27:54.867897 4718 generic.go:334] "Generic (PLEG): container finished" podID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerID="87dc3adacd2c046efb553266486373a575f55bd3c7bdb049db5ae7f52b347316" exitCode=0 Jan 23 16:27:54 crc kubenswrapper[4718]: I0123 16:27:54.868026 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" event={"ID":"76c75fbc-0e43-4fac-81e9-06bf925c0a1e","Type":"ContainerDied","Data":"87dc3adacd2c046efb553266486373a575f55bd3c7bdb049db5ae7f52b347316"} Jan 23 16:27:54 crc kubenswrapper[4718]: I0123 16:27:54.870844 4718 generic.go:334] "Generic (PLEG): container finished" podID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerID="f9f2e3c1d52ff1a4d286d26e4b1307b75d3fab17504f59be176b17fe73fd1abc" exitCode=0 Jan 23 16:27:54 crc kubenswrapper[4718]: I0123 16:27:54.870920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" event={"ID":"78a945c6-373e-4c80-acb5-1dd5a14c2be6","Type":"ContainerDied","Data":"f9f2e3c1d52ff1a4d286d26e4b1307b75d3fab17504f59be176b17fe73fd1abc"} Jan 23 16:27:55 crc kubenswrapper[4718]: I0123 16:27:55.881174 4718 generic.go:334] "Generic (PLEG): container finished" podID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerID="4c1e1e5a39a3315baaa3279857e85ada2676c12da1c217b87e8b04b1851fdf4a" exitCode=0 Jan 23 16:27:55 crc kubenswrapper[4718]: I0123 16:27:55.881260 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" event={"ID":"78a945c6-373e-4c80-acb5-1dd5a14c2be6","Type":"ContainerDied","Data":"4c1e1e5a39a3315baaa3279857e85ada2676c12da1c217b87e8b04b1851fdf4a"} Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.143750 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.199116 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle\") pod \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.199164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ttg\" (UniqueName: \"kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg\") pod \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.199282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util\") pod \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\" (UID: \"76c75fbc-0e43-4fac-81e9-06bf925c0a1e\") " Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.200279 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle" (OuterVolumeSpecName: "bundle") pod "76c75fbc-0e43-4fac-81e9-06bf925c0a1e" (UID: "76c75fbc-0e43-4fac-81e9-06bf925c0a1e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.201027 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.207753 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg" (OuterVolumeSpecName: "kube-api-access-z4ttg") pod "76c75fbc-0e43-4fac-81e9-06bf925c0a1e" (UID: "76c75fbc-0e43-4fac-81e9-06bf925c0a1e"). InnerVolumeSpecName "kube-api-access-z4ttg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.301957 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ttg\" (UniqueName: \"kubernetes.io/projected/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-kube-api-access-z4ttg\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.642407 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util" (OuterVolumeSpecName: "util") pod "76c75fbc-0e43-4fac-81e9-06bf925c0a1e" (UID: "76c75fbc-0e43-4fac-81e9-06bf925c0a1e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.708747 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76c75fbc-0e43-4fac-81e9-06bf925c0a1e-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.893512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" event={"ID":"76c75fbc-0e43-4fac-81e9-06bf925c0a1e","Type":"ContainerDied","Data":"e75cfef8b387cae409a51deb57f08ea70c320afbcafc2a71c8608447bc90940c"} Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.893586 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75cfef8b387cae409a51deb57f08ea70c320afbcafc2a71c8608447bc90940c" Jan 23 16:27:56 crc kubenswrapper[4718]: I0123 16:27:56.893545 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.220883 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.319951 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647pp\" (UniqueName: \"kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp\") pod \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.320176 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle\") pod \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.320208 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util\") pod \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\" (UID: \"78a945c6-373e-4c80-acb5-1dd5a14c2be6\") " Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.322027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle" (OuterVolumeSpecName: "bundle") pod "78a945c6-373e-4c80-acb5-1dd5a14c2be6" (UID: "78a945c6-373e-4c80-acb5-1dd5a14c2be6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.327943 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp" (OuterVolumeSpecName: "kube-api-access-647pp") pod "78a945c6-373e-4c80-acb5-1dd5a14c2be6" (UID: "78a945c6-373e-4c80-acb5-1dd5a14c2be6"). InnerVolumeSpecName "kube-api-access-647pp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.348212 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util" (OuterVolumeSpecName: "util") pod "78a945c6-373e-4c80-acb5-1dd5a14c2be6" (UID: "78a945c6-373e-4c80-acb5-1dd5a14c2be6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.422565 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.423241 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a945c6-373e-4c80-acb5-1dd5a14c2be6-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.423315 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-647pp\" (UniqueName: \"kubernetes.io/projected/78a945c6-373e-4c80-acb5-1dd5a14c2be6-kube-api-access-647pp\") on node \"crc\" DevicePath \"\"" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.903020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" event={"ID":"78a945c6-373e-4c80-acb5-1dd5a14c2be6","Type":"ContainerDied","Data":"59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36"} Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.903071 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d26ae1eabfe943624a0e2fc7de2d948654c68ec4a7135700bd255d65f3df36" Jan 23 16:27:57 crc kubenswrapper[4718]: I0123 16:27:57.903114 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.507422 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6"] Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508808 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="util" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508830 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="util" Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508867 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="pull" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508882 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="pull" Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508897 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="util" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508907 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="util" Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508920 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508927 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508944 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" 
containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508951 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: E0123 16:28:06.508964 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="pull" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.508971 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="pull" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.509133 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a945c6-373e-4c80-acb5-1dd5a14c2be6" containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.509158 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c75fbc-0e43-4fac-81e9-06bf925c0a1e" containerName="extract" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.510151 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.514471 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-t4rvc" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.515524 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.515908 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.516181 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.516370 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.516496 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.534489 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6"] Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.622368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-webhook-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.622465 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-apiservice-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.622559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.622593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-manager-config\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.622748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqd67\" (UniqueName: \"kubernetes.io/projected/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-kube-api-access-nqd67\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.724273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.724334 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-manager-config\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.724409 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqd67\" (UniqueName: \"kubernetes.io/projected/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-kube-api-access-nqd67\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.724463 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-webhook-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.724499 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-apiservice-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: 
\"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.726151 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-manager-config\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.731423 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-apiservice-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.733553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-webhook-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.751804 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.771218 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nqd67\" (UniqueName: \"kubernetes.io/projected/1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3-kube-api-access-nqd67\") pod \"loki-operator-controller-manager-54c9dfbc84-hsbh6\" (UID: \"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:06 crc kubenswrapper[4718]: I0123 16:28:06.827553 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:07 crc kubenswrapper[4718]: I0123 16:28:07.320840 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6"] Jan 23 16:28:07 crc kubenswrapper[4718]: W0123 16:28:07.333583 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1615a4d2_6fc4_47ce_8f53_8dd0acc7eba3.slice/crio-97d795060eba36ab07d625ef4a403689b20d4dda71ad8a8ed5fa363e6160ba81 WatchSource:0}: Error finding container 97d795060eba36ab07d625ef4a403689b20d4dda71ad8a8ed5fa363e6160ba81: Status 404 returned error can't find the container with id 97d795060eba36ab07d625ef4a403689b20d4dda71ad8a8ed5fa363e6160ba81 Jan 23 16:28:07 crc kubenswrapper[4718]: I0123 16:28:07.987491 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" event={"ID":"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3","Type":"ContainerStarted","Data":"97d795060eba36ab07d625ef4a403689b20d4dda71ad8a8ed5fa363e6160ba81"} Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.451767 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk"] Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.452946 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.456730 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.456730 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-mvs44" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.458895 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.490715 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk"] Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.620436 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzw6\" (UniqueName: \"kubernetes.io/projected/cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a-kube-api-access-pqzw6\") pod \"cluster-logging-operator-79cf69ddc8-6gvnk\" (UID: \"cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.722332 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzw6\" (UniqueName: \"kubernetes.io/projected/cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a-kube-api-access-pqzw6\") pod \"cluster-logging-operator-79cf69ddc8-6gvnk\" (UID: \"cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.754911 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzw6\" (UniqueName: \"kubernetes.io/projected/cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a-kube-api-access-pqzw6\") pod 
\"cluster-logging-operator-79cf69ddc8-6gvnk\" (UID: \"cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" Jan 23 16:28:11 crc kubenswrapper[4718]: I0123 16:28:11.775592 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" Jan 23 16:28:12 crc kubenswrapper[4718]: I0123 16:28:12.863640 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk"] Jan 23 16:28:12 crc kubenswrapper[4718]: W0123 16:28:12.879900 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfab1ac5_2db7_41ea_8dba_31bdc0e1b22a.slice/crio-d74a0c2fd83bdfa1a712816c363d8529b6217d8974a7e0e5c6a73765a8251b2a WatchSource:0}: Error finding container d74a0c2fd83bdfa1a712816c363d8529b6217d8974a7e0e5c6a73765a8251b2a: Status 404 returned error can't find the container with id d74a0c2fd83bdfa1a712816c363d8529b6217d8974a7e0e5c6a73765a8251b2a Jan 23 16:28:13 crc kubenswrapper[4718]: I0123 16:28:13.041848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" event={"ID":"cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a","Type":"ContainerStarted","Data":"d74a0c2fd83bdfa1a712816c363d8529b6217d8974a7e0e5c6a73765a8251b2a"} Jan 23 16:28:13 crc kubenswrapper[4718]: I0123 16:28:13.043653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" event={"ID":"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3","Type":"ContainerStarted","Data":"d113c776820c66dbe9feace81178dc79aee86971190f82d2d69c16b13f966ad6"} Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.119683 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" 
event={"ID":"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3","Type":"ContainerStarted","Data":"25d0718b40ceda9dd7e9803a0f4234654dd6188568d076eca1395b712abea363"} Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.120953 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.123036 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" event={"ID":"cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a","Type":"ContainerStarted","Data":"8f776fefd04d4a2bf8cdb0028d978a4d8b254c38535564d970c22f5763740177"} Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.156749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.161321 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" podStartSLOduration=1.8337122049999999 podStartE2EDuration="16.161298834s" podCreationTimestamp="2026-01-23 16:28:06 +0000 UTC" firstStartedPulling="2026-01-23 16:28:07.339770202 +0000 UTC m=+688.487012193" lastFinishedPulling="2026-01-23 16:28:21.667356831 +0000 UTC m=+702.814598822" observedRunningTime="2026-01-23 16:28:22.157237524 +0000 UTC m=+703.304479515" watchObservedRunningTime="2026-01-23 16:28:22.161298834 +0000 UTC m=+703.308540825" Jan 23 16:28:22 crc kubenswrapper[4718]: I0123 16:28:22.240176 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6gvnk" podStartSLOduration=2.436109229 podStartE2EDuration="11.24015574s" podCreationTimestamp="2026-01-23 16:28:11 +0000 UTC" firstStartedPulling="2026-01-23 16:28:12.88362161 +0000 UTC m=+694.030863601" 
lastFinishedPulling="2026-01-23 16:28:21.687668121 +0000 UTC m=+702.834910112" observedRunningTime="2026-01-23 16:28:22.231363272 +0000 UTC m=+703.378605263" watchObservedRunningTime="2026-01-23 16:28:22.24015574 +0000 UTC m=+703.387397731" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.786476 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.788722 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.796576 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.797582 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.801296 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.801509 4718 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-l8m2v" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.937618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpg7q\" (UniqueName: \"kubernetes.io/projected/330caac5-50fb-469e-9591-3b11f0243335-kube-api-access-wpg7q\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:26 crc kubenswrapper[4718]: I0123 16:28:26.938044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:27 crc 
kubenswrapper[4718]: I0123 16:28:27.039868 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.039941 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpg7q\" (UniqueName: \"kubernetes.io/projected/330caac5-50fb-469e-9591-3b11f0243335-kube-api-access-wpg7q\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.043677 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.043727 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1aa203eb00db04429d4519e9b9ca727a1e5d78a14c8924d6ed63cca6453e345/globalmount\"" pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.064994 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpg7q\" (UniqueName: \"kubernetes.io/projected/330caac5-50fb-469e-9591-3b11f0243335-kube-api-access-wpg7q\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.078473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ec07311-0cc2-4ab3-b2dc-b10c9c877023\") pod \"minio\" (UID: \"330caac5-50fb-469e-9591-3b11f0243335\") " pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.104272 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 23 16:28:27 crc kubenswrapper[4718]: I0123 16:28:27.593036 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 23 16:28:27 crc kubenswrapper[4718]: W0123 16:28:27.601138 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod330caac5_50fb_469e_9591_3b11f0243335.slice/crio-51a54f78d18c4c00e2ef9f3a5cab77d226bb2845c1090c3d9d129d73af657e24 WatchSource:0}: Error finding container 51a54f78d18c4c00e2ef9f3a5cab77d226bb2845c1090c3d9d129d73af657e24: Status 404 returned error can't find the container with id 51a54f78d18c4c00e2ef9f3a5cab77d226bb2845c1090c3d9d129d73af657e24 Jan 23 16:28:28 crc kubenswrapper[4718]: I0123 16:28:28.170313 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"330caac5-50fb-469e-9591-3b11f0243335","Type":"ContainerStarted","Data":"51a54f78d18c4c00e2ef9f3a5cab77d226bb2845c1090c3d9d129d73af657e24"} Jan 23 16:28:32 crc kubenswrapper[4718]: I0123 16:28:32.235141 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"330caac5-50fb-469e-9591-3b11f0243335","Type":"ContainerStarted","Data":"18f4f355c7a2ab1997f5f6c24a921db985b657e989cbb9b2c2a630e233cd5328"} Jan 23 16:28:32 crc kubenswrapper[4718]: I0123 16:28:32.252123 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.335182233 podStartE2EDuration="8.25210246s" podCreationTimestamp="2026-01-23 16:28:24 +0000 UTC" firstStartedPulling="2026-01-23 16:28:27.604871986 
+0000 UTC m=+708.752113977" lastFinishedPulling="2026-01-23 16:28:31.521792213 +0000 UTC m=+712.669034204" observedRunningTime="2026-01-23 16:28:32.250131397 +0000 UTC m=+713.397373398" watchObservedRunningTime="2026-01-23 16:28:32.25210246 +0000 UTC m=+713.399344441" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.096025 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.100120 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.104392 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-d7zc5" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.104999 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.105248 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.105392 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.105563 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.112654 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.249217 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-2vjw9"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.250159 4718 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.254094 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.254389 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.254098 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.258552 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-config\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.258621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.258710 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.258727 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk78z\" (UniqueName: \"kubernetes.io/projected/c2309723-2af5-455a-8f21-41e08e80d045-kube-api-access-sk78z\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.258756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.275402 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-2vjw9"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.318383 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.319569 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.322795 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.323066 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.336556 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362344 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pdxj\" (UniqueName: \"kubernetes.io/projected/96cdf9bc-4893-4918-94e9-a23212e8ec5c-kube-api-access-6pdxj\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362439 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk78z\" (UniqueName: \"kubernetes.io/projected/c2309723-2af5-455a-8f21-41e08e80d045-kube-api-access-sk78z\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362465 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8rj\" (UniqueName: \"kubernetes.io/projected/03861098-f572-4ace-ab3b-7fddb749da7d-kube-api-access-7z8rj\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362530 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362549 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-s3\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362624 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-config\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362666 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-http\") pod 
\"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362693 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-config\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362746 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.362770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-config\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.365007 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.365099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2309723-2af5-455a-8f21-41e08e80d045-config\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.370312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.370871 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c2309723-2af5-455a-8f21-41e08e80d045-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: \"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.386650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk78z\" (UniqueName: \"kubernetes.io/projected/c2309723-2af5-455a-8f21-41e08e80d045-kube-api-access-sk78z\") pod \"logging-loki-distributor-5f678c8dd6-tpqfn\" (UID: 
\"c2309723-2af5-455a-8f21-41e08e80d045\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.418704 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.458165 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-th7vf"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.459508 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464657 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464713 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464752 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-rbac\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " 
pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464778 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-s3\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464882 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-config\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tenants\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464949 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464974 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.464998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-config\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465115 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pdxj\" (UniqueName: \"kubernetes.io/projected/96cdf9bc-4893-4918-94e9-a23212e8ec5c-kube-api-access-6pdxj\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465203 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jxjs5\" (UniqueName: \"kubernetes.io/projected/5aca942e-fa67-4679-a257-6db5cf93a95a-kube-api-access-jxjs5\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465235 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.465270 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8rj\" (UniqueName: \"kubernetes.io/projected/03861098-f572-4ace-ab3b-7fddb749da7d-kube-api-access-7z8rj\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.468980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.469155 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.470349 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.471227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cdf9bc-4893-4918-94e9-a23212e8ec5c-config\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.473524 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.473733 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.474263 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.474553 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.474882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.479094 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03861098-f572-4ace-ab3b-7fddb749da7d-config\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.479863 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.480156 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-s3\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.496016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/03861098-f572-4ace-ab3b-7fddb749da7d-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.496052 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/96cdf9bc-4893-4918-94e9-a23212e8ec5c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc 
kubenswrapper[4718]: I0123 16:28:38.497466 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8rj\" (UniqueName: \"kubernetes.io/projected/03861098-f572-4ace-ab3b-7fddb749da7d-kube-api-access-7z8rj\") pod \"logging-loki-querier-76788598db-2vjw9\" (UID: \"03861098-f572-4ace-ab3b-7fddb749da7d\") " pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.508233 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-c72c6"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.518034 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pdxj\" (UniqueName: \"kubernetes.io/projected/96cdf9bc-4893-4918-94e9-a23212e8ec5c-kube-api-access-6pdxj\") pod \"logging-loki-query-frontend-69d9546745-5cw9h\" (UID: \"96cdf9bc-4893-4918-94e9-a23212e8ec5c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.520834 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.536089 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-29nqd" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.539740 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-th7vf"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.551511 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-c72c6"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-rbac\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567615 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tenants\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567654 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567685 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98lw\" (UniqueName: \"kubernetes.io/projected/46bec7ac-b95d-425d-ab7a-4a669278b158-kube-api-access-l98lw\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: 
\"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-rbac\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567858 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tenants\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567908 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567926 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxjs5\" (UniqueName: \"kubernetes.io/projected/5aca942e-fa67-4679-a257-6db5cf93a95a-kube-api-access-jxjs5\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.567948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: E0123 16:28:38.570463 4718 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 23 16:28:38 crc kubenswrapper[4718]: E0123 16:28:38.570930 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret 
podName:5aca942e-fa67-4679-a257-6db5cf93a95a nodeName:}" failed. No retries permitted until 2026-01-23 16:28:39.07050751 +0000 UTC m=+720.217749501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret") pod "logging-loki-gateway-5f48ff8847-th7vf" (UID: "5aca942e-fa67-4679-a257-6db5cf93a95a") : secret "logging-loki-gateway-http" not found Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.571890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.571894 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.573138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.575131 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.586204 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.586887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5aca942e-fa67-4679-a257-6db5cf93a95a-rbac\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.588087 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tenants\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.605274 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxjs5\" (UniqueName: \"kubernetes.io/projected/5aca942e-fa67-4679-a257-6db5cf93a95a-kube-api-access-jxjs5\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.645758 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672005 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l98lw\" (UniqueName: \"kubernetes.io/projected/46bec7ac-b95d-425d-ab7a-4a669278b158-kube-api-access-l98lw\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672056 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-rbac\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tenants\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672186 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.672281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.673335 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.674794 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.674821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-lokistack-gateway\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: E0123 16:28:38.674888 4718 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 23 16:28:38 crc kubenswrapper[4718]: E0123 16:28:38.674931 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret podName:46bec7ac-b95d-425d-ab7a-4a669278b158 nodeName:}" failed. No retries permitted until 2026-01-23 16:28:39.174919027 +0000 UTC m=+720.322161018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret") pod "logging-loki-gateway-5f48ff8847-c72c6" (UID: "46bec7ac-b95d-425d-ab7a-4a669278b158") : secret "logging-loki-gateway-http" not found Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.675613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/46bec7ac-b95d-425d-ab7a-4a669278b158-rbac\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.676488 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.681410 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tenants\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.694883 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l98lw\" (UniqueName: \"kubernetes.io/projected/46bec7ac-b95d-425d-ab7a-4a669278b158-kube-api-access-l98lw\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.814855 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-2vjw9"] Jan 23 16:28:38 crc kubenswrapper[4718]: I0123 16:28:38.941510 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.079231 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.084929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5aca942e-fa67-4679-a257-6db5cf93a95a-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-th7vf\" (UID: \"5aca942e-fa67-4679-a257-6db5cf93a95a\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.135821 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.154978 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.186410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.196538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/46bec7ac-b95d-425d-ab7a-4a669278b158-tls-secret\") pod \"logging-loki-gateway-5f48ff8847-c72c6\" (UID: \"46bec7ac-b95d-425d-ab7a-4a669278b158\") " pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.268199 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.269571 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.272048 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.272307 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.282477 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.288871 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.289884 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.292480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" event={"ID":"c2309723-2af5-455a-8f21-41e08e80d045","Type":"ContainerStarted","Data":"7ff7c9841bcae6de61461b9a32a3ded86fdd7e0ecbe087daf2e169572c30bf1f"} Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.294064 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.294556 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.309726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" event={"ID":"03861098-f572-4ace-ab3b-7fddb749da7d","Type":"ContainerStarted","Data":"5d602165b98ac3e06e61e81d1f8a0c3d84760b31aa59ec6fbaf43f59d6e66d58"} Jan 23 16:28:39 crc 
kubenswrapper[4718]: I0123 16:28:39.315945 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" event={"ID":"96cdf9bc-4893-4918-94e9-a23212e8ec5c","Type":"ContainerStarted","Data":"3b562d3e0f61c53fb50032a40f65033c2529227ea961d0469a84701deee725f0"} Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.317726 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.372873 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.374255 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.378822 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.378880 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.387267 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.389815 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.389868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.389913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-config\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.389958 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.390006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8576\" (UniqueName: \"kubernetes.io/projected/cdb096c7-cef2-48a8-9f83-4752311a02be-kube-api-access-j8576\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.390040 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 
16:28:39.390057 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.390080 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.453057 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-29nqd" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.461834 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8576\" (UniqueName: \"kubernetes.io/projected/cdb096c7-cef2-48a8-9f83-4752311a02be-kube-api-access-j8576\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492339 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46k6\" (UniqueName: 
\"kubernetes.io/projected/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-kube-api-access-m46k6\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492428 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492459 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492478 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhds\" (UniqueName: \"kubernetes.io/projected/014d7cb2-435f-4a6f-85af-6bc6553d6704-kube-api-access-wvhds\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492495 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492516 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-config\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492564 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492589 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-config\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc 
kubenswrapper[4718]: I0123 16:28:39.492610 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492646 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492875 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-config\") pod 
\"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.492956 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.494144 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.496746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cdb096c7-cef2-48a8-9f83-4752311a02be-config\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.498036 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.498082 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8d5f2b86b7ee93484b286a0a7af0eece79faacb68fa23e110c8cc1270c4da1e2/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.498111 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.498138 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0ba30e6339a33bf4851247143684819aa7a5ddb43fbbc66f226fbecda54c228/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.501211 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.501605 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.502249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdb096c7-cef2-48a8-9f83-4752311a02be-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.511024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8576\" (UniqueName: 
\"kubernetes.io/projected/cdb096c7-cef2-48a8-9f83-4752311a02be-kube-api-access-j8576\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.526862 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77e6a01-769f-4ad6-bacb-c345917d01c3\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.528093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58bf8df4-39f5-4d0c-99e6-9baee00cb52c\") pod \"logging-loki-ingester-0\" (UID: \"cdb096c7-cef2-48a8-9f83-4752311a02be\") " pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46k6\" (UniqueName: \"kubernetes.io/projected/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-kube-api-access-m46k6\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594741 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594835 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhds\" (UniqueName: \"kubernetes.io/projected/014d7cb2-435f-4a6f-85af-6bc6553d6704-kube-api-access-wvhds\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594864 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.594923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc 
kubenswrapper[4718]: I0123 16:28:39.594977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-config\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595004 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595056 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595099 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " 
pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595167 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-config\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.595214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.598051 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-config\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.599063 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.601310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.602776 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.603350 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.604530 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-config\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.606524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.606646 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.608366 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.608369 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-th7vf"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.615156 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/014d7cb2-435f-4a6f-85af-6bc6553d6704-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.617865 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.617920 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/14afa0c7620739deada860f6217d76c3d5f0d3bb543839b4812841177d45451e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.618098 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.618135 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/36c88ae1b38260a6d5e34beade8e5d152b6ebd2f5f4ce519ba12153b0f0ac185/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.619763 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46k6\" (UniqueName: \"kubernetes.io/projected/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-kube-api-access-m46k6\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.623763 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c3a642bb-f3f3-4e14-9442-0aa47e1b7b43-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.625790 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvhds\" (UniqueName: \"kubernetes.io/projected/014d7cb2-435f-4a6f-85af-6bc6553d6704-kube-api-access-wvhds\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.656297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbe0d784-93c0-4c21-bfdb-a2670d7a7643\") pod \"logging-loki-index-gateway-0\" (UID: \"014d7cb2-435f-4a6f-85af-6bc6553d6704\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.685070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e81d4f62-1ea9-40f3-b873-09c6e9aedf76\") pod \"logging-loki-compactor-0\" (UID: \"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43\") " pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.732251 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.932112 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.961723 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f48ff8847-c72c6"] Jan 23 16:28:39 crc kubenswrapper[4718]: I0123 16:28:39.967495 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 23 16:28:40 crc kubenswrapper[4718]: I0123 16:28:40.327858 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 23 16:28:40 crc kubenswrapper[4718]: I0123 16:28:40.328302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"cdb096c7-cef2-48a8-9f83-4752311a02be","Type":"ContainerStarted","Data":"0b7a0f182200d77895cc2c0b20301167285a341f88df820b80d88fe4abde6479"} Jan 23 16:28:40 crc kubenswrapper[4718]: I0123 16:28:40.330017 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" event={"ID":"46bec7ac-b95d-425d-ab7a-4a669278b158","Type":"ContainerStarted","Data":"8b82eedc397a8072cd0f28a8f14c50a86a5be66f301be9ea1fd9ea30f6d4aa22"} Jan 23 16:28:40 crc kubenswrapper[4718]: I0123 16:28:40.333552 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 23 16:28:40 crc kubenswrapper[4718]: I0123 16:28:40.334755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" event={"ID":"5aca942e-fa67-4679-a257-6db5cf93a95a","Type":"ContainerStarted","Data":"0325a51ff606d24788ca0fa802cb155ad0b01e1943696200252ac0e0d1690a2d"} Jan 23 16:28:40 crc kubenswrapper[4718]: W0123 16:28:40.343757 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod014d7cb2_435f_4a6f_85af_6bc6553d6704.slice/crio-7e03687619a8663d207858f294acf5840e3fa4d9b72805836c46c657cd212e80 WatchSource:0}: Error finding container 7e03687619a8663d207858f294acf5840e3fa4d9b72805836c46c657cd212e80: Status 404 returned error can't find the container with id 7e03687619a8663d207858f294acf5840e3fa4d9b72805836c46c657cd212e80 Jan 23 16:28:40 crc kubenswrapper[4718]: W0123 16:28:40.346086 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a642bb_f3f3_4e14_9442_0aa47e1b7b43.slice/crio-6ff03059fadc0ef872f9a81b5cd1411d40b5b7762ea5be6dfdcd87a97f24880d WatchSource:0}: Error finding container 6ff03059fadc0ef872f9a81b5cd1411d40b5b7762ea5be6dfdcd87a97f24880d: Status 404 returned error can't find the container with id 6ff03059fadc0ef872f9a81b5cd1411d40b5b7762ea5be6dfdcd87a97f24880d Jan 23 16:28:41 crc kubenswrapper[4718]: I0123 16:28:41.343432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43","Type":"ContainerStarted","Data":"6ff03059fadc0ef872f9a81b5cd1411d40b5b7762ea5be6dfdcd87a97f24880d"} Jan 23 16:28:41 crc kubenswrapper[4718]: I0123 16:28:41.345731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"014d7cb2-435f-4a6f-85af-6bc6553d6704","Type":"ContainerStarted","Data":"7e03687619a8663d207858f294acf5840e3fa4d9b72805836c46c657cd212e80"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.363401 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" event={"ID":"5aca942e-fa67-4679-a257-6db5cf93a95a","Type":"ContainerStarted","Data":"0fd6178209fdd7805280d75e3f68ad22a7b899b6a0f809770e08990fc9b30ffe"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.365913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" event={"ID":"03861098-f572-4ace-ab3b-7fddb749da7d","Type":"ContainerStarted","Data":"fe69ba9296730173f78c4d07b7206014c4aae8499138a803710eb7deea116a08"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.366362 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.376960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"014d7cb2-435f-4a6f-85af-6bc6553d6704","Type":"ContainerStarted","Data":"629ef977e166600d0040c7672de81e7e9b01fc2ca25d549fbb1343818c0e1dea"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.377979 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.379552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" event={"ID":"96cdf9bc-4893-4918-94e9-a23212e8ec5c","Type":"ContainerStarted","Data":"70d8a91d763f5eb94dbea362d518667b191e7885798f0adb4c9696178f029c07"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.380120 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.381484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" event={"ID":"c2309723-2af5-455a-8f21-41e08e80d045","Type":"ContainerStarted","Data":"e69de84a14fa47897716fa62f7acfedeb5daad1bade57679379ab32c9836d36c"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.382007 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.408651 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" podStartSLOduration=1.539122844 podStartE2EDuration="5.408611024s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:38.823978072 +0000 UTC m=+719.971220063" lastFinishedPulling="2026-01-23 16:28:42.693466212 +0000 UTC m=+723.840708243" observedRunningTime="2026-01-23 16:28:43.393046608 +0000 UTC m=+724.540288599" watchObservedRunningTime="2026-01-23 16:28:43.408611024 +0000 UTC m=+724.555853015" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.417479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"cdb096c7-cef2-48a8-9f83-4752311a02be","Type":"ContainerStarted","Data":"77b9f9dea981e0e99253c15475c1d3330af8cf078bb158d1eb8c0ca0b559af17"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.417800 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.420542 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" event={"ID":"46bec7ac-b95d-425d-ab7a-4a669278b158","Type":"ContainerStarted","Data":"49929e99be8a03bc76de0b79dafbdc8d8e0067f160cefab40b015b55659d7cd8"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.426038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"c3a642bb-f3f3-4e14-9442-0aa47e1b7b43","Type":"ContainerStarted","Data":"38d3e1950f4052a4f4e4e7a3042742ca53a1d48d3ad076d9d81c8bfa57b13aed"} Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.426501 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.445124 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" podStartSLOduration=1.6804930150000001 podStartE2EDuration="5.445101293s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:38.927787933 +0000 UTC m=+720.075029914" lastFinishedPulling="2026-01-23 16:28:42.692396151 +0000 UTC m=+723.839638192" observedRunningTime="2026-01-23 16:28:43.436146923 +0000 UTC m=+724.583388914" watchObservedRunningTime="2026-01-23 16:28:43.445101293 +0000 UTC m=+724.592343284" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.465484 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" podStartSLOduration=1.998164772 podStartE2EDuration="5.465454542s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:39.166822382 +0000 UTC m=+720.314064373" 
lastFinishedPulling="2026-01-23 16:28:42.634112152 +0000 UTC m=+723.781354143" observedRunningTime="2026-01-23 16:28:43.453187879 +0000 UTC m=+724.600429870" watchObservedRunningTime="2026-01-23 16:28:43.465454542 +0000 UTC m=+724.612696533" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.483380 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.130186502 podStartE2EDuration="5.483349562s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:40.350579728 +0000 UTC m=+721.497821719" lastFinishedPulling="2026-01-23 16:28:42.703742748 +0000 UTC m=+723.850984779" observedRunningTime="2026-01-23 16:28:43.482029085 +0000 UTC m=+724.629271076" watchObservedRunningTime="2026-01-23 16:28:43.483349562 +0000 UTC m=+724.630591553" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.505998 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.151522258 podStartE2EDuration="5.505975584s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:40.349006424 +0000 UTC m=+721.496248415" lastFinishedPulling="2026-01-23 16:28:42.70345971 +0000 UTC m=+723.850701741" observedRunningTime="2026-01-23 16:28:43.503262769 +0000 UTC m=+724.650504810" watchObservedRunningTime="2026-01-23 16:28:43.505975584 +0000 UTC m=+724.653217575" Jan 23 16:28:43 crc kubenswrapper[4718]: I0123 16:28:43.532603 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.8599886310000002 podStartE2EDuration="5.532583967s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:40.023829228 +0000 UTC m=+721.171071219" lastFinishedPulling="2026-01-23 16:28:42.696424514 +0000 UTC m=+723.843666555" observedRunningTime="2026-01-23 
16:28:43.527379902 +0000 UTC m=+724.674621893" watchObservedRunningTime="2026-01-23 16:28:43.532583967 +0000 UTC m=+724.679825958" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.447724 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" event={"ID":"46bec7ac-b95d-425d-ab7a-4a669278b158","Type":"ContainerStarted","Data":"b25097068a66de9cbef34195ccd3215b16a07f1bf3d0a6c0e0e40b40f8415dc6"} Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.448182 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.448210 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.454037 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" event={"ID":"5aca942e-fa67-4679-a257-6db5cf93a95a","Type":"ContainerStarted","Data":"c6a4e14ee7f28e869441ec1faa8cff4448f386f17988e4726d0330835c70e2e4"} Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.455676 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.455723 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.465930 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.470546 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:45 crc 
kubenswrapper[4718]: I0123 16:28:45.472791 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.474311 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.529275 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podStartSLOduration=2.57038942 podStartE2EDuration="7.529236737s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:40.031159203 +0000 UTC m=+721.178401194" lastFinishedPulling="2026-01-23 16:28:44.99000652 +0000 UTC m=+726.137248511" observedRunningTime="2026-01-23 16:28:45.488814947 +0000 UTC m=+726.636056948" watchObservedRunningTime="2026-01-23 16:28:45.529236737 +0000 UTC m=+726.676478798" Jan 23 16:28:45 crc kubenswrapper[4718]: I0123 16:28:45.592383 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podStartSLOduration=2.254549695 podStartE2EDuration="7.5923555s" podCreationTimestamp="2026-01-23 16:28:38 +0000 UTC" firstStartedPulling="2026-01-23 16:28:39.643668336 +0000 UTC m=+720.790910327" lastFinishedPulling="2026-01-23 16:28:44.981474131 +0000 UTC m=+726.128716132" observedRunningTime="2026-01-23 16:28:45.586393954 +0000 UTC m=+726.733635945" watchObservedRunningTime="2026-01-23 16:28:45.5923555 +0000 UTC m=+726.739597501" Jan 23 16:28:58 crc kubenswrapper[4718]: I0123 16:28:58.424800 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tpqfn" Jan 23 16:28:58 crc kubenswrapper[4718]: I0123 16:28:58.585787 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-querier-76788598db-2vjw9" Jan 23 16:28:58 crc kubenswrapper[4718]: I0123 16:28:58.660858 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" Jan 23 16:28:59 crc kubenswrapper[4718]: I0123 16:28:59.613881 4718 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 23 16:28:59 crc kubenswrapper[4718]: I0123 16:28:59.613991 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="cdb096c7-cef2-48a8-9f83-4752311a02be" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 16:28:59 crc kubenswrapper[4718]: I0123 16:28:59.742938 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 23 16:28:59 crc kubenswrapper[4718]: I0123 16:28:59.941186 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 23 16:29:09 crc kubenswrapper[4718]: I0123 16:29:09.611883 4718 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 23 16:29:09 crc kubenswrapper[4718]: I0123 16:29:09.612950 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="cdb096c7-cef2-48a8-9f83-4752311a02be" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 16:29:12 crc kubenswrapper[4718]: I0123 16:29:12.280586 4718 
dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 16:29:19 crc kubenswrapper[4718]: I0123 16:29:19.611006 4718 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 23 16:29:19 crc kubenswrapper[4718]: I0123 16:29:19.611537 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="cdb096c7-cef2-48a8-9f83-4752311a02be" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 16:29:29 crc kubenswrapper[4718]: I0123 16:29:29.609341 4718 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 23 16:29:29 crc kubenswrapper[4718]: I0123 16:29:29.610295 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="cdb096c7-cef2-48a8-9f83-4752311a02be" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 16:29:39 crc kubenswrapper[4718]: I0123 16:29:39.612836 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.510303 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-n8c97"] Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.514986 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.519311 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.521276 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.521748 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.533118 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.536792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-n8c97"] Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.539767 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-8s2gn" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.541220 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.643363 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.643454 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt\") pod \"collector-n8c97\" (UID: 
\"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.643578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmnr\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.643677 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644192 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644383 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 
16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644598 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.644705 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.693666 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-n8c97"] Jan 23 16:29:57 crc kubenswrapper[4718]: E0123 16:29:57.694488 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-4wmnr metrics sa-token tmp 
trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-n8c97" podUID="4faa264d-dd12-45b6-b533-b30d0f51f194" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746720 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746758 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wmnr\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746790 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746857 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " 
pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.746982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.747020 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.747045 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.747074 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: E0123 16:29:57.747281 4718 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 23 16:29:57 crc kubenswrapper[4718]: E0123 16:29:57.747362 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver podName:4faa264d-dd12-45b6-b533-b30d0f51f194 nodeName:}" failed. No retries permitted until 2026-01-23 16:29:58.247333056 +0000 UTC m=+799.394575057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver") pod "collector-n8c97" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194") : secret "collector-syslog-receiver" not found Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.747833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.747936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: E0123 16:29:57.748007 4718 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: 
secret "collector-metrics" not found Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.748017 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: E0123 16:29:57.748048 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics podName:4faa264d-dd12-45b6-b533-b30d0f51f194 nodeName:}" failed. No retries permitted until 2026-01-23 16:29:58.248036256 +0000 UTC m=+799.395278257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics") pod "collector-n8c97" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194") : secret "collector-metrics" not found Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.748373 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.748390 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.758493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp\") pod \"collector-n8c97\" (UID: 
\"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.758757 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.771692 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wmnr\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:57 crc kubenswrapper[4718]: I0123 16:29:57.775996 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.163811 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.176313 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255666 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255795 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255944 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.255968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmnr\" (UniqueName: 
\"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.256044 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.256063 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.256098 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.256361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.256512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc 
kubenswrapper[4718]: I0123 16:29:58.256962 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir" (OuterVolumeSpecName: "datadir") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.257483 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config" (OuterVolumeSpecName: "config") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.258163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.258297 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.258678 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.260904 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr" (OuterVolumeSpecName: "kube-api-access-4wmnr") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "kube-api-access-4wmnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.261287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token" (OuterVolumeSpecName: "collector-token") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.261992 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp" (OuterVolumeSpecName: "tmp") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.262070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.262593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token" (OuterVolumeSpecName: "sa-token") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.263068 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") pod \"collector-n8c97\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " pod="openshift-logging/collector-n8c97" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358002 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358286 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") pod \"4faa264d-dd12-45b6-b533-b30d0f51f194\" (UID: \"4faa264d-dd12-45b6-b533-b30d0f51f194\") " Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358866 4718 
reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4faa264d-dd12-45b6-b533-b30d0f51f194-datadir\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358896 4718 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358912 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358921 4718 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4faa264d-dd12-45b6-b533-b30d0f51f194-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358930 4718 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358940 4718 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358949 4718 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358959 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wmnr\" (UniqueName: 
\"kubernetes.io/projected/4faa264d-dd12-45b6-b533-b30d0f51f194-kube-api-access-4wmnr\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.358969 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4faa264d-dd12-45b6-b533-b30d0f51f194-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.361765 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.361825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics" (OuterVolumeSpecName: "metrics") pod "4faa264d-dd12-45b6-b533-b30d0f51f194" (UID: "4faa264d-dd12-45b6-b533-b30d0f51f194"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.460310 4718 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.460357 4718 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4faa264d-dd12-45b6-b533-b30d0f51f194-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.875565 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:29:58 crc kubenswrapper[4718]: I0123 16:29:58.875706 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.172193 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-n8c97" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.277907 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-n8c97"] Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.300159 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-n8c97"] Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.319700 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-b6hvd"] Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.320928 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.326991 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-8s2gn" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.327165 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.327535 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.328022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-b6hvd"] Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.328053 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.328374 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.334995 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.481164 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.481657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-syslog-receiver\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.481697 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-metrics\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.481852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-datadir\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-trusted-ca\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482142 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzlt\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-kube-api-access-llzlt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482181 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-tmp\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482226 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-sa-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482437 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config-openshift-service-cacrt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.482602 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" 
(UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-entrypoint\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585088 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585175 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-trusted-ca\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzlt\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-kube-api-access-llzlt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585271 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-tmp\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585319 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-sa-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " 
pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585440 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config-openshift-service-cacrt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585496 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-entrypoint\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-syslog-receiver\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-metrics\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585762 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-datadir\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.585915 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-datadir\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.586305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.587106 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-config-openshift-service-cacrt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.587454 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-entrypoint\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.588496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-trusted-ca\") pod \"collector-b6hvd\" 
(UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.593829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-syslog-receiver\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.596108 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-metrics\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.603794 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-collector-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.605659 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-tmp\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.614673 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzlt\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-kube-api-access-llzlt\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.624144 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e45ebe67-eb65-4bf4-8d7b-f03a7113f22e-sa-token\") pod \"collector-b6hvd\" (UID: \"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e\") " pod="openshift-logging/collector-b6hvd" Jan 23 16:29:59 crc kubenswrapper[4718]: I0123 16:29:59.645898 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-b6hvd" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.153430 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4"] Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.155176 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.160700 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-b6hvd"] Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.165889 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.168584 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.178186 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4"] Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.196824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-b6hvd" event={"ID":"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e","Type":"ContainerStarted","Data":"db3252129c9366d82920d22e79ce6a081f6be79d4dba8a4de35ecc29c8107566"} Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.301148 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ndq\" (UniqueName: \"kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.301254 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.301337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.402918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ndq\" (UniqueName: \"kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.402985 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: 
\"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.403044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.404056 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.410800 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.424231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ndq\" (UniqueName: \"kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq\") pod \"collect-profiles-29486430-k48z4\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.504816 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:00 crc kubenswrapper[4718]: I0123 16:30:00.983423 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4"] Jan 23 16:30:01 crc kubenswrapper[4718]: I0123 16:30:01.150234 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4faa264d-dd12-45b6-b533-b30d0f51f194" path="/var/lib/kubelet/pods/4faa264d-dd12-45b6-b533-b30d0f51f194/volumes" Jan 23 16:30:01 crc kubenswrapper[4718]: I0123 16:30:01.203599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" event={"ID":"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8","Type":"ContainerStarted","Data":"fbee9811e68b49242de9623807874fecd620cb8ef8425662623893402cd997bb"} Jan 23 16:30:01 crc kubenswrapper[4718]: I0123 16:30:01.203701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" event={"ID":"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8","Type":"ContainerStarted","Data":"878cc2b519661a1d3ea3c439aeae9228a13a3f0c8cf53dd839edf57d563a5d14"} Jan 23 16:30:01 crc kubenswrapper[4718]: I0123 16:30:01.232178 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" podStartSLOduration=1.232151558 podStartE2EDuration="1.232151558s" podCreationTimestamp="2026-01-23 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:30:01.224232883 +0000 UTC m=+802.371474874" watchObservedRunningTime="2026-01-23 16:30:01.232151558 +0000 UTC m=+802.379393549" Jan 23 16:30:02 crc kubenswrapper[4718]: I0123 16:30:02.214735 4718 generic.go:334] "Generic (PLEG): container finished" podID="659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" 
containerID="fbee9811e68b49242de9623807874fecd620cb8ef8425662623893402cd997bb" exitCode=0 Jan 23 16:30:02 crc kubenswrapper[4718]: I0123 16:30:02.214835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" event={"ID":"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8","Type":"ContainerDied","Data":"fbee9811e68b49242de9623807874fecd620cb8ef8425662623893402cd997bb"} Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.529794 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.668824 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume\") pod \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.669459 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume\") pod \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.669791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" (UID: "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.669827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6ndq\" (UniqueName: \"kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq\") pod \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\" (UID: \"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8\") " Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.670684 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.673794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq" (OuterVolumeSpecName: "kube-api-access-c6ndq") pod "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" (UID: "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8"). InnerVolumeSpecName "kube-api-access-c6ndq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.673848 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" (UID: "659d9e7c-d96f-4e98-b3b2-2c99f81d25c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.772892 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6ndq\" (UniqueName: \"kubernetes.io/projected/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-kube-api-access-c6ndq\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:07 crc kubenswrapper[4718]: I0123 16:30:07.772925 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:08 crc kubenswrapper[4718]: I0123 16:30:08.259580 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-b6hvd" event={"ID":"e45ebe67-eb65-4bf4-8d7b-f03a7113f22e","Type":"ContainerStarted","Data":"615672041fd23947ec5cc292eb8166aa9b8d5fadafda3daff98f0493057c207c"} Jan 23 16:30:08 crc kubenswrapper[4718]: I0123 16:30:08.261230 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" event={"ID":"659d9e7c-d96f-4e98-b3b2-2c99f81d25c8","Type":"ContainerDied","Data":"878cc2b519661a1d3ea3c439aeae9228a13a3f0c8cf53dd839edf57d563a5d14"} Jan 23 16:30:08 crc kubenswrapper[4718]: I0123 16:30:08.261281 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878cc2b519661a1d3ea3c439aeae9228a13a3f0c8cf53dd839edf57d563a5d14" Jan 23 16:30:08 crc kubenswrapper[4718]: I0123 16:30:08.261290 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4" Jan 23 16:30:08 crc kubenswrapper[4718]: I0123 16:30:08.295762 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-b6hvd" podStartSLOduration=1.922432756 podStartE2EDuration="9.295738779s" podCreationTimestamp="2026-01-23 16:29:59 +0000 UTC" firstStartedPulling="2026-01-23 16:30:00.192461662 +0000 UTC m=+801.339703663" lastFinishedPulling="2026-01-23 16:30:07.565767695 +0000 UTC m=+808.713009686" observedRunningTime="2026-01-23 16:30:08.290503717 +0000 UTC m=+809.437745718" watchObservedRunningTime="2026-01-23 16:30:08.295738779 +0000 UTC m=+809.442980770" Jan 23 16:30:28 crc kubenswrapper[4718]: I0123 16:30:28.875714 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:30:28 crc kubenswrapper[4718]: I0123 16:30:28.876257 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.045338 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2"] Jan 23 16:30:41 crc kubenswrapper[4718]: E0123 16:30:41.046472 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" containerName="collect-profiles" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.046488 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" containerName="collect-profiles" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.046658 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" containerName="collect-profiles" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.047797 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.049700 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.114453 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52hg\" (UniqueName: \"kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.114971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.115035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" 
(UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.133848 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2"] Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.217316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.217405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.217470 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52hg\" (UniqueName: \"kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.217995 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.218062 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.258361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52hg\" (UniqueName: \"kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.432270 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:41 crc kubenswrapper[4718]: I0123 16:30:41.856697 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2"] Jan 23 16:30:42 crc kubenswrapper[4718]: I0123 16:30:42.553710 4718 generic.go:334] "Generic (PLEG): container finished" podID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerID="40080ffcd6386799d73e350c1f10d32a187e90163dc8001f7cc0175c209bacea" exitCode=0 Jan 23 16:30:42 crc kubenswrapper[4718]: I0123 16:30:42.553834 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" event={"ID":"f3a19209-a098-4d8f-8c8c-9b345cb3c185","Type":"ContainerDied","Data":"40080ffcd6386799d73e350c1f10d32a187e90163dc8001f7cc0175c209bacea"} Jan 23 16:30:42 crc kubenswrapper[4718]: I0123 16:30:42.554027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" event={"ID":"f3a19209-a098-4d8f-8c8c-9b345cb3c185","Type":"ContainerStarted","Data":"d6a3dd3b964acb24a18ee9ccc997609ad7fc0fd2c565f7ffd5ef9daf55986520"} Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.395500 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.398944 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.409926 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.414138 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.414223 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8z8\" (UniqueName: \"kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.414266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.516826 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8z8\" (UniqueName: \"kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.516913 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.517021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.517704 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.518196 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.543383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8z8\" (UniqueName: \"kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8\") pod \"redhat-operators-zvjj2\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:43 crc kubenswrapper[4718]: I0123 16:30:43.721076 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:44 crc kubenswrapper[4718]: I0123 16:30:44.224660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:30:44 crc kubenswrapper[4718]: W0123 16:30:44.226221 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2422207f_15e3_46a5_8043_1fbb60283450.slice/crio-219c2867767d888a574afaf2a79121421b41e32ce21df31864292b4360d14484 WatchSource:0}: Error finding container 219c2867767d888a574afaf2a79121421b41e32ce21df31864292b4360d14484: Status 404 returned error can't find the container with id 219c2867767d888a574afaf2a79121421b41e32ce21df31864292b4360d14484 Jan 23 16:30:44 crc kubenswrapper[4718]: I0123 16:30:44.570101 4718 generic.go:334] "Generic (PLEG): container finished" podID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerID="d29808c7ac1d42b5d60310ae9c01229fb403c8f26c14295ea95565bfe5b01ce3" exitCode=0 Jan 23 16:30:44 crc kubenswrapper[4718]: I0123 16:30:44.570171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" event={"ID":"f3a19209-a098-4d8f-8c8c-9b345cb3c185","Type":"ContainerDied","Data":"d29808c7ac1d42b5d60310ae9c01229fb403c8f26c14295ea95565bfe5b01ce3"} Jan 23 16:30:44 crc kubenswrapper[4718]: I0123 16:30:44.572284 4718 generic.go:334] "Generic (PLEG): container finished" podID="2422207f-15e3-46a5-8043-1fbb60283450" containerID="418785db6fac2c25ce5eedbca4e84c768af0ef46cf92f003b5ffae1ad585254c" exitCode=0 Jan 23 16:30:44 crc kubenswrapper[4718]: I0123 16:30:44.572341 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerDied","Data":"418785db6fac2c25ce5eedbca4e84c768af0ef46cf92f003b5ffae1ad585254c"} Jan 23 16:30:44 crc 
kubenswrapper[4718]: I0123 16:30:44.572431 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerStarted","Data":"219c2867767d888a574afaf2a79121421b41e32ce21df31864292b4360d14484"} Jan 23 16:30:45 crc kubenswrapper[4718]: I0123 16:30:45.581220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerStarted","Data":"74d5998325f5c9edb8d0c7088ad832c23d1d0ba8cf9a41eeebd38fd454fcd0df"} Jan 23 16:30:45 crc kubenswrapper[4718]: I0123 16:30:45.583824 4718 generic.go:334] "Generic (PLEG): container finished" podID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerID="2cd8e15a99aee0c63aa172a1e88d76a2b0ff79fbd21e599e517c6873001c049a" exitCode=0 Jan 23 16:30:45 crc kubenswrapper[4718]: I0123 16:30:45.583876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" event={"ID":"f3a19209-a098-4d8f-8c8c-9b345cb3c185","Type":"ContainerDied","Data":"2cd8e15a99aee0c63aa172a1e88d76a2b0ff79fbd21e599e517c6873001c049a"} Jan 23 16:30:46 crc kubenswrapper[4718]: I0123 16:30:46.595531 4718 generic.go:334] "Generic (PLEG): container finished" podID="2422207f-15e3-46a5-8043-1fbb60283450" containerID="74d5998325f5c9edb8d0c7088ad832c23d1d0ba8cf9a41eeebd38fd454fcd0df" exitCode=0 Jan 23 16:30:46 crc kubenswrapper[4718]: I0123 16:30:46.595671 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerDied","Data":"74d5998325f5c9edb8d0c7088ad832c23d1d0ba8cf9a41eeebd38fd454fcd0df"} Jan 23 16:30:46 crc kubenswrapper[4718]: I0123 16:30:46.990552 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.191519 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m52hg\" (UniqueName: \"kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg\") pod \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.191798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle\") pod \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.191998 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util\") pod \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\" (UID: \"f3a19209-a098-4d8f-8c8c-9b345cb3c185\") " Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.192551 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle" (OuterVolumeSpecName: "bundle") pod "f3a19209-a098-4d8f-8c8c-9b345cb3c185" (UID: "f3a19209-a098-4d8f-8c8c-9b345cb3c185"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.195092 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.205146 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg" (OuterVolumeSpecName: "kube-api-access-m52hg") pod "f3a19209-a098-4d8f-8c8c-9b345cb3c185" (UID: "f3a19209-a098-4d8f-8c8c-9b345cb3c185"). InnerVolumeSpecName "kube-api-access-m52hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.224395 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util" (OuterVolumeSpecName: "util") pod "f3a19209-a098-4d8f-8c8c-9b345cb3c185" (UID: "f3a19209-a098-4d8f-8c8c-9b345cb3c185"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.297209 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m52hg\" (UniqueName: \"kubernetes.io/projected/f3a19209-a098-4d8f-8c8c-9b345cb3c185-kube-api-access-m52hg\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.297263 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3a19209-a098-4d8f-8c8c-9b345cb3c185-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.603984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" event={"ID":"f3a19209-a098-4d8f-8c8c-9b345cb3c185","Type":"ContainerDied","Data":"d6a3dd3b964acb24a18ee9ccc997609ad7fc0fd2c565f7ffd5ef9daf55986520"} Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.604027 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a3dd3b964acb24a18ee9ccc997609ad7fc0fd2c565f7ffd5ef9daf55986520" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.604037 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2" Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.606262 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerStarted","Data":"246f9a10c2b802b34b674bf95f2a55e8e76593d2b57875645c6185a552884984"} Jan 23 16:30:47 crc kubenswrapper[4718]: I0123 16:30:47.629498 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvjj2" podStartSLOduration=1.878646808 podStartE2EDuration="4.629473709s" podCreationTimestamp="2026-01-23 16:30:43 +0000 UTC" firstStartedPulling="2026-01-23 16:30:44.574430716 +0000 UTC m=+845.721672717" lastFinishedPulling="2026-01-23 16:30:47.325257627 +0000 UTC m=+848.472499618" observedRunningTime="2026-01-23 16:30:47.626092247 +0000 UTC m=+848.773334238" watchObservedRunningTime="2026-01-23 16:30:47.629473709 +0000 UTC m=+848.776715690" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.254496 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2n5g"] Jan 23 16:30:51 crc kubenswrapper[4718]: E0123 16:30:51.255283 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="pull" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.255296 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="pull" Jan 23 16:30:51 crc kubenswrapper[4718]: E0123 16:30:51.255310 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="extract" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.255315 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="extract" Jan 
23 16:30:51 crc kubenswrapper[4718]: E0123 16:30:51.255328 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="util" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.255334 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="util" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.255480 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a19209-a098-4d8f-8c8c-9b345cb3c185" containerName="extract" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.256108 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.258011 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-x7s8z" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.258779 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.260165 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.270528 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2n5g"] Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.398111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fb2\" (UniqueName: \"kubernetes.io/projected/1ae3f970-005a-47f5-9539-ba299ac76301-kube-api-access-69fb2\") pod \"nmstate-operator-646758c888-q2n5g\" (UID: \"1ae3f970-005a-47f5-9539-ba299ac76301\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.499958 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fb2\" (UniqueName: \"kubernetes.io/projected/1ae3f970-005a-47f5-9539-ba299ac76301-kube-api-access-69fb2\") pod \"nmstate-operator-646758c888-q2n5g\" (UID: \"1ae3f970-005a-47f5-9539-ba299ac76301\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.537036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fb2\" (UniqueName: \"kubernetes.io/projected/1ae3f970-005a-47f5-9539-ba299ac76301-kube-api-access-69fb2\") pod \"nmstate-operator-646758c888-q2n5g\" (UID: \"1ae3f970-005a-47f5-9539-ba299ac76301\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" Jan 23 16:30:51 crc kubenswrapper[4718]: I0123 16:30:51.586843 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" Jan 23 16:30:52 crc kubenswrapper[4718]: I0123 16:30:52.138157 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2n5g"] Jan 23 16:30:52 crc kubenswrapper[4718]: W0123 16:30:52.148035 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae3f970_005a_47f5_9539_ba299ac76301.slice/crio-aec8c37fcf3b1d8951c31d1a3eeec457e4044abf41009d40c8905efd2862b66b WatchSource:0}: Error finding container aec8c37fcf3b1d8951c31d1a3eeec457e4044abf41009d40c8905efd2862b66b: Status 404 returned error can't find the container with id aec8c37fcf3b1d8951c31d1a3eeec457e4044abf41009d40c8905efd2862b66b Jan 23 16:30:52 crc kubenswrapper[4718]: I0123 16:30:52.663300 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" 
event={"ID":"1ae3f970-005a-47f5-9539-ba299ac76301","Type":"ContainerStarted","Data":"aec8c37fcf3b1d8951c31d1a3eeec457e4044abf41009d40c8905efd2862b66b"} Jan 23 16:30:53 crc kubenswrapper[4718]: I0123 16:30:53.722010 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:53 crc kubenswrapper[4718]: I0123 16:30:53.727166 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:30:54 crc kubenswrapper[4718]: I0123 16:30:54.788529 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvjj2" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="registry-server" probeResult="failure" output=< Jan 23 16:30:54 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 16:30:54 crc kubenswrapper[4718]: > Jan 23 16:30:55 crc kubenswrapper[4718]: I0123 16:30:55.690904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" event={"ID":"1ae3f970-005a-47f5-9539-ba299ac76301","Type":"ContainerStarted","Data":"bb025a20df6d670c1b02b167f5b11716daa02b084b4ef857a0b5095ef89dab35"} Jan 23 16:30:55 crc kubenswrapper[4718]: I0123 16:30:55.714339 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-q2n5g" podStartSLOduration=2.107129755 podStartE2EDuration="4.714310458s" podCreationTimestamp="2026-01-23 16:30:51 +0000 UTC" firstStartedPulling="2026-01-23 16:30:52.150480065 +0000 UTC m=+853.297722056" lastFinishedPulling="2026-01-23 16:30:54.757660768 +0000 UTC m=+855.904902759" observedRunningTime="2026-01-23 16:30:55.71143169 +0000 UTC m=+856.858673681" watchObservedRunningTime="2026-01-23 16:30:55.714310458 +0000 UTC m=+856.861552459" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.793239 4718 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.797062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.805549 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.895998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5fh\" (UniqueName: \"kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.896064 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.896091 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.998311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5fh\" (UniqueName: \"kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh\") pod \"redhat-marketplace-pg4mx\" (UID: 
\"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.998359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.998384 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.998907 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:56 crc kubenswrapper[4718]: I0123 16:30:56.999095 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:57 crc kubenswrapper[4718]: I0123 16:30:57.018406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5fh\" (UniqueName: \"kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh\") pod \"redhat-marketplace-pg4mx\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " 
pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:57 crc kubenswrapper[4718]: I0123 16:30:57.133963 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:30:57 crc kubenswrapper[4718]: I0123 16:30:57.424548 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:30:57 crc kubenswrapper[4718]: I0123 16:30:57.706908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerStarted","Data":"8de3cf88550f126aebd2ccdad820c57ad336a2e3bd280ca248b6d2ffeb8e2012"} Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.715846 4718 generic.go:334] "Generic (PLEG): container finished" podID="a295317a-5807-4be4-8700-5de8b08e5975" containerID="2b5e388021705635647ef7b599adeea4f6f9cd2443cb6248e0585206cc3339ca" exitCode=0 Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.715944 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerDied","Data":"2b5e388021705635647ef7b599adeea4f6f9cd2443cb6248e0585206cc3339ca"} Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.875600 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.875687 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.875738 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.876413 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:30:58 crc kubenswrapper[4718]: I0123 16:30:58.876479 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2" gracePeriod=600 Jan 23 16:30:59 crc kubenswrapper[4718]: I0123 16:30:59.731478 4718 generic.go:334] "Generic (PLEG): container finished" podID="a295317a-5807-4be4-8700-5de8b08e5975" containerID="689629a9eaa1761f7a59c7852e227d377e90c65aedf7990993b7613abf72a6ba" exitCode=0 Jan 23 16:30:59 crc kubenswrapper[4718]: I0123 16:30:59.731846 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerDied","Data":"689629a9eaa1761f7a59c7852e227d377e90c65aedf7990993b7613abf72a6ba"} Jan 23 16:30:59 crc kubenswrapper[4718]: I0123 16:30:59.738937 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2" exitCode=0 Jan 23 16:30:59 crc kubenswrapper[4718]: 
I0123 16:30:59.739039 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2"} Jan 23 16:30:59 crc kubenswrapper[4718]: I0123 16:30:59.739099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3"} Jan 23 16:30:59 crc kubenswrapper[4718]: I0123 16:30:59.739149 4718 scope.go:117] "RemoveContainer" containerID="7d6fc078511f90cfb7ae2b356622e186c6cc3d8dadc1f9dd98c3eb3e0635e278" Jan 23 16:31:00 crc kubenswrapper[4718]: I0123 16:31:00.751116 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerStarted","Data":"26883940149ea9d607e83198a2f05bcbc25535297c68d9ed2a74bdaba158baa8"} Jan 23 16:31:00 crc kubenswrapper[4718]: I0123 16:31:00.770744 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg4mx" podStartSLOduration=3.368971695 podStartE2EDuration="4.770724539s" podCreationTimestamp="2026-01-23 16:30:56 +0000 UTC" firstStartedPulling="2026-01-23 16:30:58.71812093 +0000 UTC m=+859.865362921" lastFinishedPulling="2026-01-23 16:31:00.119873754 +0000 UTC m=+861.267115765" observedRunningTime="2026-01-23 16:31:00.769008212 +0000 UTC m=+861.916250203" watchObservedRunningTime="2026-01-23 16:31:00.770724539 +0000 UTC m=+861.917966530" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.486532 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4r42t"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.488710 
4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.493599 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4r42t"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.501024 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hbmxh"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.502244 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.503960 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bw8hs" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.509819 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.511101 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.518014 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.566280 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brsn8\" (UniqueName: \"kubernetes.io/projected/c66f413f-8a00-4526-b93f-4d739aec140c-kube-api-access-brsn8\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622134 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-dbus-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622165 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-ovs-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622206 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtk2k\" (UniqueName: \"kubernetes.io/projected/c9ada4d9-34eb-43fb-a0ba-09b879eab797-kube-api-access-vtk2k\") pod \"nmstate-metrics-54757c584b-4r42t\" (UID: 
\"c9ada4d9-34eb-43fb-a0ba-09b879eab797\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622259 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-nmstate-lock\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622312 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.622336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dhn\" (UniqueName: \"kubernetes.io/projected/9d41c1ee-b304-42c0-a2e7-2fe83315a430-kube-api-access-27dhn\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.715584 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.717529 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.722423 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ztf67" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.722758 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.722954 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724132 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brsn8\" (UniqueName: \"kubernetes.io/projected/c66f413f-8a00-4526-b93f-4d739aec140c-kube-api-access-brsn8\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724476 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-dbus-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-ovs-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724649 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtk2k\" (UniqueName: 
\"kubernetes.io/projected/c9ada4d9-34eb-43fb-a0ba-09b879eab797-kube-api-access-vtk2k\") pod \"nmstate-metrics-54757c584b-4r42t\" (UID: \"c9ada4d9-34eb-43fb-a0ba-09b879eab797\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724743 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-nmstate-lock\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724839 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.724915 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dhn\" (UniqueName: \"kubernetes.io/projected/9d41c1ee-b304-42c0-a2e7-2fe83315a430-kube-api-access-27dhn\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.725550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-ovs-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.725569 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-dbus-socket\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: E0123 16:31:02.725778 4718 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 16:31:02 crc kubenswrapper[4718]: E0123 16:31:02.725842 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair podName:9d41c1ee-b304-42c0-a2e7-2fe83315a430 nodeName:}" failed. No retries permitted until 2026-01-23 16:31:03.225819912 +0000 UTC m=+864.373061893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hqbgx" (UID: "9d41c1ee-b304-42c0-a2e7-2fe83315a430") : secret "openshift-nmstate-webhook" not found Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.734244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.735618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c66f413f-8a00-4526-b93f-4d739aec140c-nmstate-lock\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.752077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brsn8\" (UniqueName: \"kubernetes.io/projected/c66f413f-8a00-4526-b93f-4d739aec140c-kube-api-access-brsn8\") pod \"nmstate-handler-hbmxh\" (UID: \"c66f413f-8a00-4526-b93f-4d739aec140c\") " pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 
16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.752299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtk2k\" (UniqueName: \"kubernetes.io/projected/c9ada4d9-34eb-43fb-a0ba-09b879eab797-kube-api-access-vtk2k\") pod \"nmstate-metrics-54757c584b-4r42t\" (UID: \"c9ada4d9-34eb-43fb-a0ba-09b879eab797\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.785789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dhn\" (UniqueName: \"kubernetes.io/projected/9d41c1ee-b304-42c0-a2e7-2fe83315a430-kube-api-access-27dhn\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.829943 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.850380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.850557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zmvj\" (UniqueName: \"kubernetes.io/projected/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-kube-api-access-7zmvj\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.852415 4718 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.897125 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:02 crc kubenswrapper[4718]: W0123 16:31:02.916343 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc66f413f_8a00_4526_b93f_4d739aec140c.slice/crio-abbe19a0df6faf566e0f9ca921a43d246903a56c60ef3bcd5272bb64e30dd02e WatchSource:0}: Error finding container abbe19a0df6faf566e0f9ca921a43d246903a56c60ef3bcd5272bb64e30dd02e: Status 404 returned error can't find the container with id abbe19a0df6faf566e0f9ca921a43d246903a56c60ef3bcd5272bb64e30dd02e Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.974604 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:31:02 crc kubenswrapper[4718]: I0123 16:31:02.975839 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:02.999947 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.000042 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zmvj\" (UniqueName: \"kubernetes.io/projected/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-kube-api-access-7zmvj\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.000085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.001020 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.004289 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.005451 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.018774 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zmvj\" (UniqueName: \"kubernetes.io/projected/4ff516ae-ef38-4eb8-9721-b5e809fa1a53-kube-api-access-7zmvj\") pod \"nmstate-console-plugin-7754f76f8b-2h982\" (UID: \"4ff516ae-ef38-4eb8-9721-b5e809fa1a53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.101744 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.101789 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2stz\" (UniqueName: \"kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.101836 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " 
pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.101870 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.102067 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.102110 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.102336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.203753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" 
(UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2stz\" (UniqueName: \"kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204233 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204248 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle\") pod \"console-69ccdfb68b-l4gxm\" (UID: 
\"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.204307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.205328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.205597 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.206190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.206597 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " 
pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.217246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.221407 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.222256 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2stz\" (UniqueName: \"kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz\") pod \"console-69ccdfb68b-l4gxm\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.253137 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.306776 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.318483 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d41c1ee-b304-42c0-a2e7-2fe83315a430-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hqbgx\" (UID: \"9d41c1ee-b304-42c0-a2e7-2fe83315a430\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.336146 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.448800 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4r42t"] Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.498151 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.548223 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982"] Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.767609 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx"] Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.787009 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.835503 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hbmxh" event={"ID":"c66f413f-8a00-4526-b93f-4d739aec140c","Type":"ContainerStarted","Data":"abbe19a0df6faf566e0f9ca921a43d246903a56c60ef3bcd5272bb64e30dd02e"} Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.837299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" event={"ID":"4ff516ae-ef38-4eb8-9721-b5e809fa1a53","Type":"ContainerStarted","Data":"e88bb68b383c8b24bcbb9fb4bbf95a6ff68bec7442a214a97b52f374fc6cc251"} Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.838405 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" event={"ID":"9d41c1ee-b304-42c0-a2e7-2fe83315a430","Type":"ContainerStarted","Data":"3dfdc5d685bf37fd92f141e9c4fa31075369b2a9d56a2b110de836717947514d"} Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.839930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" event={"ID":"c9ada4d9-34eb-43fb-a0ba-09b879eab797","Type":"ContainerStarted","Data":"0c51ccfe39b31a3bc21d09ecdb1f54d1d7bd489a8606f8b3f1f51b9eb8cfefab"} Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.859002 
4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:31:03 crc kubenswrapper[4718]: I0123 16:31:03.881935 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:31:04 crc kubenswrapper[4718]: I0123 16:31:04.025706 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:31:04 crc kubenswrapper[4718]: I0123 16:31:04.855042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69ccdfb68b-l4gxm" event={"ID":"ac65643e-309d-4ea6-a522-ab62f944c544","Type":"ContainerStarted","Data":"99d2c014b42402883f9ab5f9734accca712e7d99a3945a0a1a02979c81eead3e"} Jan 23 16:31:04 crc kubenswrapper[4718]: I0123 16:31:04.855107 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69ccdfb68b-l4gxm" event={"ID":"ac65643e-309d-4ea6-a522-ab62f944c544","Type":"ContainerStarted","Data":"6eac19d948e947a00083ea470649bd4e822672c7994735ecf464e43cb7319aea"} Jan 23 16:31:04 crc kubenswrapper[4718]: I0123 16:31:04.855305 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvjj2" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="registry-server" containerID="cri-o://246f9a10c2b802b34b674bf95f2a55e8e76593d2b57875645c6185a552884984" gracePeriod=2 Jan 23 16:31:04 crc kubenswrapper[4718]: I0123 16:31:04.890350 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69ccdfb68b-l4gxm" podStartSLOduration=2.890320017 podStartE2EDuration="2.890320017s" podCreationTimestamp="2026-01-23 16:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:31:04.881307243 +0000 UTC m=+866.028549254" watchObservedRunningTime="2026-01-23 
16:31:04.890320017 +0000 UTC m=+866.037562008" Jan 23 16:31:05 crc kubenswrapper[4718]: I0123 16:31:05.868891 4718 generic.go:334] "Generic (PLEG): container finished" podID="2422207f-15e3-46a5-8043-1fbb60283450" containerID="246f9a10c2b802b34b674bf95f2a55e8e76593d2b57875645c6185a552884984" exitCode=0 Jan 23 16:31:05 crc kubenswrapper[4718]: I0123 16:31:05.869022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerDied","Data":"246f9a10c2b802b34b674bf95f2a55e8e76593d2b57875645c6185a552884984"} Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.769001 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.894442 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8z8\" (UniqueName: \"kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8\") pod \"2422207f-15e3-46a5-8043-1fbb60283450\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.895109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities\") pod \"2422207f-15e3-46a5-8043-1fbb60283450\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.895185 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content\") pod \"2422207f-15e3-46a5-8043-1fbb60283450\" (UID: \"2422207f-15e3-46a5-8043-1fbb60283450\") " Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.901074 4718 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities" (OuterVolumeSpecName: "utilities") pod "2422207f-15e3-46a5-8043-1fbb60283450" (UID: "2422207f-15e3-46a5-8043-1fbb60283450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.917781 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjj2" event={"ID":"2422207f-15e3-46a5-8043-1fbb60283450","Type":"ContainerDied","Data":"219c2867767d888a574afaf2a79121421b41e32ce21df31864292b4360d14484"} Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.917880 4718 scope.go:117] "RemoveContainer" containerID="246f9a10c2b802b34b674bf95f2a55e8e76593d2b57875645c6185a552884984" Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.918302 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjj2" Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.923062 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8" (OuterVolumeSpecName: "kube-api-access-sw8z8") pod "2422207f-15e3-46a5-8043-1fbb60283450" (UID: "2422207f-15e3-46a5-8043-1fbb60283450"). InnerVolumeSpecName "kube-api-access-sw8z8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:06 crc kubenswrapper[4718]: I0123 16:31:06.970673 4718 scope.go:117] "RemoveContainer" containerID="74d5998325f5c9edb8d0c7088ad832c23d1d0ba8cf9a41eeebd38fd454fcd0df" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.004376 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8z8\" (UniqueName: \"kubernetes.io/projected/2422207f-15e3-46a5-8043-1fbb60283450-kube-api-access-sw8z8\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.004403 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.014376 4718 scope.go:117] "RemoveContainer" containerID="418785db6fac2c25ce5eedbca4e84c768af0ef46cf92f003b5ffae1ad585254c" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.077317 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2422207f-15e3-46a5-8043-1fbb60283450" (UID: "2422207f-15e3-46a5-8043-1fbb60283450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.107593 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2422207f-15e3-46a5-8043-1fbb60283450-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.135185 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.135278 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.197205 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.247904 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.259103 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvjj2"] Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.933302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" event={"ID":"c9ada4d9-34eb-43fb-a0ba-09b879eab797","Type":"ContainerStarted","Data":"06484dd041851770587f266f04a5981c6e65299ff013ac11e9d44d4764234ffe"} Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.937673 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hbmxh" event={"ID":"c66f413f-8a00-4526-b93f-4d739aec140c","Type":"ContainerStarted","Data":"5d7fc97eb9ea6141b67249b1c32bec9833e9303e9eef887efea1f96f24fbf983"} Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.937820 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.939390 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" event={"ID":"9d41c1ee-b304-42c0-a2e7-2fe83315a430","Type":"ContainerStarted","Data":"bf93a543ddf7b3f3dfbcaa8289abc57e17f97284ea328fa7ef29e97c8fdabf7d"} Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.939499 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.942153 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" event={"ID":"4ff516ae-ef38-4eb8-9721-b5e809fa1a53","Type":"ContainerStarted","Data":"fd5b9e3d21ebc22718d94503ac3f151a60a22e4fdd2a05974b610f7fd81f9396"} Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.962403 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hbmxh" podStartSLOduration=2.150837307 podStartE2EDuration="5.96237867s" podCreationTimestamp="2026-01-23 16:31:02 +0000 UTC" firstStartedPulling="2026-01-23 16:31:02.919134996 +0000 UTC m=+864.066376987" lastFinishedPulling="2026-01-23 16:31:06.730676359 +0000 UTC m=+867.877918350" observedRunningTime="2026-01-23 16:31:07.955225656 +0000 UTC m=+869.102467667" watchObservedRunningTime="2026-01-23 16:31:07.96237867 +0000 UTC m=+869.109620661" Jan 23 16:31:07 crc kubenswrapper[4718]: I0123 16:31:07.984481 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2h982" podStartSLOduration=2.892099095 podStartE2EDuration="5.984452939s" podCreationTimestamp="2026-01-23 16:31:02 +0000 UTC" firstStartedPulling="2026-01-23 16:31:03.584150486 +0000 UTC m=+864.731392477" lastFinishedPulling="2026-01-23 16:31:06.67650433 +0000 UTC 
m=+867.823746321" observedRunningTime="2026-01-23 16:31:07.978387974 +0000 UTC m=+869.125629965" watchObservedRunningTime="2026-01-23 16:31:07.984452939 +0000 UTC m=+869.131694930" Jan 23 16:31:08 crc kubenswrapper[4718]: I0123 16:31:08.001349 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:08 crc kubenswrapper[4718]: I0123 16:31:08.008934 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" podStartSLOduration=2.955801663 podStartE2EDuration="6.008916612s" podCreationTimestamp="2026-01-23 16:31:02 +0000 UTC" firstStartedPulling="2026-01-23 16:31:03.789799964 +0000 UTC m=+864.937041955" lastFinishedPulling="2026-01-23 16:31:06.842914913 +0000 UTC m=+867.990156904" observedRunningTime="2026-01-23 16:31:07.99483956 +0000 UTC m=+869.142081551" watchObservedRunningTime="2026-01-23 16:31:08.008916612 +0000 UTC m=+869.156158603" Jan 23 16:31:09 crc kubenswrapper[4718]: I0123 16:31:09.162361 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2422207f-15e3-46a5-8043-1fbb60283450" path="/var/lib/kubelet/pods/2422207f-15e3-46a5-8043-1fbb60283450/volumes" Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.431713 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.432726 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg4mx" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="registry-server" containerID="cri-o://26883940149ea9d607e83198a2f05bcbc25535297c68d9ed2a74bdaba158baa8" gracePeriod=2 Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.972665 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" 
event={"ID":"c9ada4d9-34eb-43fb-a0ba-09b879eab797","Type":"ContainerStarted","Data":"7dc0943e77c1c75923dc93a6928c2ae0ed31a5d0e8c4b57da464fa22a9d26eae"} Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.976832 4718 generic.go:334] "Generic (PLEG): container finished" podID="a295317a-5807-4be4-8700-5de8b08e5975" containerID="26883940149ea9d607e83198a2f05bcbc25535297c68d9ed2a74bdaba158baa8" exitCode=0 Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.976913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerDied","Data":"26883940149ea9d607e83198a2f05bcbc25535297c68d9ed2a74bdaba158baa8"} Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.977041 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg4mx" event={"ID":"a295317a-5807-4be4-8700-5de8b08e5975","Type":"ContainerDied","Data":"8de3cf88550f126aebd2ccdad820c57ad336a2e3bd280ca248b6d2ffeb8e2012"} Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.977072 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de3cf88550f126aebd2ccdad820c57ad336a2e3bd280ca248b6d2ffeb8e2012" Jan 23 16:31:10 crc kubenswrapper[4718]: I0123 16:31:10.993077 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-4r42t" podStartSLOduration=2.64228992 podStartE2EDuration="8.993051361s" podCreationTimestamp="2026-01-23 16:31:02 +0000 UTC" firstStartedPulling="2026-01-23 16:31:03.470267697 +0000 UTC m=+864.617509688" lastFinishedPulling="2026-01-23 16:31:09.821029138 +0000 UTC m=+870.968271129" observedRunningTime="2026-01-23 16:31:10.990273015 +0000 UTC m=+872.137515036" watchObservedRunningTime="2026-01-23 16:31:10.993051361 +0000 UTC m=+872.140293352" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.042679 4718 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.113792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content\") pod \"a295317a-5807-4be4-8700-5de8b08e5975\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.114076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities\") pod \"a295317a-5807-4be4-8700-5de8b08e5975\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.114259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5fh\" (UniqueName: \"kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh\") pod \"a295317a-5807-4be4-8700-5de8b08e5975\" (UID: \"a295317a-5807-4be4-8700-5de8b08e5975\") " Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.115396 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities" (OuterVolumeSpecName: "utilities") pod "a295317a-5807-4be4-8700-5de8b08e5975" (UID: "a295317a-5807-4be4-8700-5de8b08e5975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.124937 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh" (OuterVolumeSpecName: "kube-api-access-ml5fh") pod "a295317a-5807-4be4-8700-5de8b08e5975" (UID: "a295317a-5807-4be4-8700-5de8b08e5975"). 
InnerVolumeSpecName "kube-api-access-ml5fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.132970 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a295317a-5807-4be4-8700-5de8b08e5975" (UID: "a295317a-5807-4be4-8700-5de8b08e5975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.217064 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.217115 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a295317a-5807-4be4-8700-5de8b08e5975-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.217128 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5fh\" (UniqueName: \"kubernetes.io/projected/a295317a-5807-4be4-8700-5de8b08e5975-kube-api-access-ml5fh\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:11 crc kubenswrapper[4718]: I0123 16:31:11.987069 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg4mx" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.013539 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.021383 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg4mx"] Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.640568 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.641572 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="extract-utilities" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.641596 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="extract-utilities" Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.641618 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="extract-utilities" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.641938 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="extract-utilities" Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.641963 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="extract-content" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.641974 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="extract-content" Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.642007 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="registry-server" Jan 
23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.642016 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="registry-server" Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.642040 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="extract-content" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.642048 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="extract-content" Jan 23 16:31:12 crc kubenswrapper[4718]: E0123 16:31:12.642059 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="registry-server" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.642086 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="registry-server" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.642301 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a295317a-5807-4be4-8700-5de8b08e5975" containerName="registry-server" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.642348 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2422207f-15e3-46a5-8043-1fbb60283450" containerName="registry-server" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.643710 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.665199 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.746920 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.747099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.747172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblnx\" (UniqueName: \"kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.848816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.848889 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fblnx\" (UniqueName: \"kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.848928 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.849398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.849669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.876913 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblnx\" (UniqueName: \"kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx\") pod \"community-operators-zs5fk\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.918559 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-hbmxh" Jan 23 16:31:12 crc kubenswrapper[4718]: I0123 16:31:12.994205 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:13 crc kubenswrapper[4718]: I0123 16:31:13.151031 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a295317a-5807-4be4-8700-5de8b08e5975" path="/var/lib/kubelet/pods/a295317a-5807-4be4-8700-5de8b08e5975/volumes" Jan 23 16:31:13 crc kubenswrapper[4718]: I0123 16:31:13.337172 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:13 crc kubenswrapper[4718]: I0123 16:31:13.337568 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:13 crc kubenswrapper[4718]: I0123 16:31:13.346947 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:13 crc kubenswrapper[4718]: I0123 16:31:13.577540 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:14 crc kubenswrapper[4718]: I0123 16:31:14.028338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerStarted","Data":"cfd9c9ecd9e6996c59703bd11053a3374fde2d118e4651ad63d194a4f4c77b11"} Jan 23 16:31:14 crc kubenswrapper[4718]: I0123 16:31:14.037959 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:31:14 crc kubenswrapper[4718]: I0123 16:31:14.125024 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:31:15 crc kubenswrapper[4718]: I0123 16:31:15.036897 4718 generic.go:334] 
"Generic (PLEG): container finished" podID="3ecc2d21-e511-4323-a21f-48a78800a566" containerID="558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d" exitCode=0 Jan 23 16:31:15 crc kubenswrapper[4718]: I0123 16:31:15.037001 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerDied","Data":"558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d"} Jan 23 16:31:15 crc kubenswrapper[4718]: I0123 16:31:15.039872 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:31:17 crc kubenswrapper[4718]: I0123 16:31:17.079110 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ecc2d21-e511-4323-a21f-48a78800a566" containerID="49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4" exitCode=0 Jan 23 16:31:17 crc kubenswrapper[4718]: I0123 16:31:17.079325 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerDied","Data":"49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4"} Jan 23 16:31:18 crc kubenswrapper[4718]: I0123 16:31:18.093025 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerStarted","Data":"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe"} Jan 23 16:31:18 crc kubenswrapper[4718]: I0123 16:31:18.120242 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zs5fk" podStartSLOduration=3.64389321 podStartE2EDuration="6.120212153s" podCreationTimestamp="2026-01-23 16:31:12 +0000 UTC" firstStartedPulling="2026-01-23 16:31:15.039604227 +0000 UTC m=+876.186846218" lastFinishedPulling="2026-01-23 16:31:17.51592316 
+0000 UTC m=+878.663165161" observedRunningTime="2026-01-23 16:31:18.115272088 +0000 UTC m=+879.262514089" watchObservedRunningTime="2026-01-23 16:31:18.120212153 +0000 UTC m=+879.267454174" Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.843131 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.845887 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.857361 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.908837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwh5q\" (UniqueName: \"kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.908959 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:20 crc kubenswrapper[4718]: I0123 16:31:20.909216 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc 
kubenswrapper[4718]: I0123 16:31:21.011278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwh5q\" (UniqueName: \"kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.011354 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.011410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.011991 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.012019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.031175 
4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwh5q\" (UniqueName: \"kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q\") pod \"certified-operators-5v9nt\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:21 crc kubenswrapper[4718]: I0123 16:31:21.182478 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:21.679298 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:22.123422 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerID="5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19" exitCode=0 Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:22.123531 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerDied","Data":"5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19"} Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:22.123893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerStarted","Data":"611602799c829deba2951af444357b25dbea77368e7829c8808393b5757c618b"} Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:22.994660 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:22 crc kubenswrapper[4718]: I0123 16:31:22.995392 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:23 
crc kubenswrapper[4718]: I0123 16:31:23.061348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:23 crc kubenswrapper[4718]: I0123 16:31:23.135999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerStarted","Data":"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46"} Jan 23 16:31:23 crc kubenswrapper[4718]: I0123 16:31:23.213017 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:23 crc kubenswrapper[4718]: I0123 16:31:23.505369 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" Jan 23 16:31:24 crc kubenswrapper[4718]: I0123 16:31:24.151791 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerID="0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46" exitCode=0 Jan 23 16:31:24 crc kubenswrapper[4718]: I0123 16:31:24.151942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerDied","Data":"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46"} Jan 23 16:31:25 crc kubenswrapper[4718]: I0123 16:31:25.165688 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerStarted","Data":"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d"} Jan 23 16:31:25 crc kubenswrapper[4718]: I0123 16:31:25.198229 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5v9nt" 
podStartSLOduration=2.781412144 podStartE2EDuration="5.19819394s" podCreationTimestamp="2026-01-23 16:31:20 +0000 UTC" firstStartedPulling="2026-01-23 16:31:22.126162239 +0000 UTC m=+883.273404270" lastFinishedPulling="2026-01-23 16:31:24.542944065 +0000 UTC m=+885.690186066" observedRunningTime="2026-01-23 16:31:25.190410559 +0000 UTC m=+886.337652570" watchObservedRunningTime="2026-01-23 16:31:25.19819394 +0000 UTC m=+886.345435961" Jan 23 16:31:25 crc kubenswrapper[4718]: I0123 16:31:25.431368 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:25 crc kubenswrapper[4718]: I0123 16:31:25.431741 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zs5fk" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="registry-server" containerID="cri-o://d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe" gracePeriod=2 Jan 23 16:31:25 crc kubenswrapper[4718]: I0123 16:31:25.922061 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.017065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblnx\" (UniqueName: \"kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx\") pod \"3ecc2d21-e511-4323-a21f-48a78800a566\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.017127 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities\") pod \"3ecc2d21-e511-4323-a21f-48a78800a566\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.017188 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content\") pod \"3ecc2d21-e511-4323-a21f-48a78800a566\" (UID: \"3ecc2d21-e511-4323-a21f-48a78800a566\") " Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.017902 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities" (OuterVolumeSpecName: "utilities") pod "3ecc2d21-e511-4323-a21f-48a78800a566" (UID: "3ecc2d21-e511-4323-a21f-48a78800a566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.023099 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx" (OuterVolumeSpecName: "kube-api-access-fblnx") pod "3ecc2d21-e511-4323-a21f-48a78800a566" (UID: "3ecc2d21-e511-4323-a21f-48a78800a566"). InnerVolumeSpecName "kube-api-access-fblnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.079438 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ecc2d21-e511-4323-a21f-48a78800a566" (UID: "3ecc2d21-e511-4323-a21f-48a78800a566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.119854 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.120139 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblnx\" (UniqueName: \"kubernetes.io/projected/3ecc2d21-e511-4323-a21f-48a78800a566-kube-api-access-fblnx\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.120301 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ecc2d21-e511-4323-a21f-48a78800a566-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.179775 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ecc2d21-e511-4323-a21f-48a78800a566" containerID="d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe" exitCode=0 Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.179924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerDied","Data":"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe"} Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.179968 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-zs5fk" event={"ID":"3ecc2d21-e511-4323-a21f-48a78800a566","Type":"ContainerDied","Data":"cfd9c9ecd9e6996c59703bd11053a3374fde2d118e4651ad63d194a4f4c77b11"} Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.179984 4718 scope.go:117] "RemoveContainer" containerID="d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.180499 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zs5fk" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.211989 4718 scope.go:117] "RemoveContainer" containerID="49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.230814 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.239542 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zs5fk"] Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.251769 4718 scope.go:117] "RemoveContainer" containerID="558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.275009 4718 scope.go:117] "RemoveContainer" containerID="d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe" Jan 23 16:31:26 crc kubenswrapper[4718]: E0123 16:31:26.275999 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ecc2d21_e511_4323_a21f_48a78800a566.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ecc2d21_e511_4323_a21f_48a78800a566.slice/crio-cfd9c9ecd9e6996c59703bd11053a3374fde2d118e4651ad63d194a4f4c77b11\": 
RecentStats: unable to find data in memory cache]" Jan 23 16:31:26 crc kubenswrapper[4718]: E0123 16:31:26.276485 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe\": container with ID starting with d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe not found: ID does not exist" containerID="d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.276521 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe"} err="failed to get container status \"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe\": rpc error: code = NotFound desc = could not find container \"d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe\": container with ID starting with d3f553092e6ee3a76e9135cc9f00a09722b71a2690242de30dbb80e22eaf6bbe not found: ID does not exist" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.276551 4718 scope.go:117] "RemoveContainer" containerID="49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4" Jan 23 16:31:26 crc kubenswrapper[4718]: E0123 16:31:26.276964 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4\": container with ID starting with 49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4 not found: ID does not exist" containerID="49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.277018 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4"} err="failed to 
get container status \"49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4\": rpc error: code = NotFound desc = could not find container \"49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4\": container with ID starting with 49d676f50b8888c8c3d4bd4c6da5d97c67d02be2fbc35bc156e97c1bec55c0b4 not found: ID does not exist" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.277055 4718 scope.go:117] "RemoveContainer" containerID="558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d" Jan 23 16:31:26 crc kubenswrapper[4718]: E0123 16:31:26.277354 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d\": container with ID starting with 558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d not found: ID does not exist" containerID="558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d" Jan 23 16:31:26 crc kubenswrapper[4718]: I0123 16:31:26.277386 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d"} err="failed to get container status \"558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d\": rpc error: code = NotFound desc = could not find container \"558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d\": container with ID starting with 558c40ce29c7c78a38d83b514caa9e493dd9d12d1fd9ba5015219f95c121cd7d not found: ID does not exist" Jan 23 16:31:27 crc kubenswrapper[4718]: I0123 16:31:27.158528 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" path="/var/lib/kubelet/pods/3ecc2d21-e511-4323-a21f-48a78800a566/volumes" Jan 23 16:31:31 crc kubenswrapper[4718]: I0123 16:31:31.182915 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:31 crc kubenswrapper[4718]: I0123 16:31:31.183400 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:31 crc kubenswrapper[4718]: I0123 16:31:31.239370 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:31 crc kubenswrapper[4718]: I0123 16:31:31.310068 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:32 crc kubenswrapper[4718]: I0123 16:31:32.228945 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.271744 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5v9nt" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="registry-server" containerID="cri-o://2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d" gracePeriod=2 Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.719874 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.833056 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content\") pod \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.833615 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities\") pod \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.833814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwh5q\" (UniqueName: \"kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q\") pod \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\" (UID: \"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac\") " Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.834354 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities" (OuterVolumeSpecName: "utilities") pod "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" (UID: "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.834779 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.841167 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q" (OuterVolumeSpecName: "kube-api-access-lwh5q") pod "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" (UID: "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac"). InnerVolumeSpecName "kube-api-access-lwh5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.908247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" (UID: "d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.936212 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwh5q\" (UniqueName: \"kubernetes.io/projected/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-kube-api-access-lwh5q\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:33 crc kubenswrapper[4718]: I0123 16:31:33.936255 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.285862 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerID="2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d" exitCode=0 Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.285942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerDied","Data":"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d"} Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.286022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v9nt" event={"ID":"d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac","Type":"ContainerDied","Data":"611602799c829deba2951af444357b25dbea77368e7829c8808393b5757c618b"} Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.286046 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v9nt" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.286057 4718 scope.go:117] "RemoveContainer" containerID="2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.322059 4718 scope.go:117] "RemoveContainer" containerID="0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.346272 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.352721 4718 scope.go:117] "RemoveContainer" containerID="5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.357604 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5v9nt"] Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.394554 4718 scope.go:117] "RemoveContainer" containerID="2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d" Jan 23 16:31:34 crc kubenswrapper[4718]: E0123 16:31:34.395227 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d\": container with ID starting with 2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d not found: ID does not exist" containerID="2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.395378 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d"} err="failed to get container status \"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d\": rpc error: code = NotFound desc = could not find 
container \"2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d\": container with ID starting with 2ec3cd9ef2f54fd3031a36bad5d23b75152044ed7f7202604dadf7187d92536d not found: ID does not exist" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.395509 4718 scope.go:117] "RemoveContainer" containerID="0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46" Jan 23 16:31:34 crc kubenswrapper[4718]: E0123 16:31:34.396364 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46\": container with ID starting with 0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46 not found: ID does not exist" containerID="0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.396434 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46"} err="failed to get container status \"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46\": rpc error: code = NotFound desc = could not find container \"0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46\": container with ID starting with 0b37f1e91200e9e8efeb739e6fa5ba7042c3a00ef8ddb8a382f93eb09b6b3e46 not found: ID does not exist" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.396483 4718 scope.go:117] "RemoveContainer" containerID="5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19" Jan 23 16:31:34 crc kubenswrapper[4718]: E0123 16:31:34.396908 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19\": container with ID starting with 5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19 not found: ID does 
not exist" containerID="5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19" Jan 23 16:31:34 crc kubenswrapper[4718]: I0123 16:31:34.397030 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19"} err="failed to get container status \"5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19\": rpc error: code = NotFound desc = could not find container \"5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19\": container with ID starting with 5c467dc1fbc6f81d40cc58d23d06f9d1cb927dfc486cfc392349a74d69d6db19 not found: ID does not exist" Jan 23 16:31:35 crc kubenswrapper[4718]: I0123 16:31:35.151562 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" path="/var/lib/kubelet/pods/d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac/volumes" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.183506 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5c466d7564-7dkhr" podUID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" containerName="console" containerID="cri-o://67b17f6cb013b001ee0896b5d0004543934916a49a6827d678cf260828a6b6f4" gracePeriod=15 Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.357688 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c466d7564-7dkhr_e61d3adb-1166-4fe7-b38d-efbbf01446ea/console/0.log" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.358208 4718 generic.go:334] "Generic (PLEG): container finished" podID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" containerID="67b17f6cb013b001ee0896b5d0004543934916a49a6827d678cf260828a6b6f4" exitCode=2 Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.358287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c466d7564-7dkhr" 
event={"ID":"e61d3adb-1166-4fe7-b38d-efbbf01446ea","Type":"ContainerDied","Data":"67b17f6cb013b001ee0896b5d0004543934916a49a6827d678cf260828a6b6f4"} Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.676455 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c466d7564-7dkhr_e61d3adb-1166-4fe7-b38d-efbbf01446ea/console/0.log" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.677425 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.772059 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.773225 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca" (OuterVolumeSpecName: "service-ca") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.773792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.773889 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.774079 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.774212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.774268 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z25g\" (UniqueName: \"kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.774333 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert\") pod \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\" (UID: \"e61d3adb-1166-4fe7-b38d-efbbf01446ea\") " Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.774555 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config" (OuterVolumeSpecName: "console-config") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.775020 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.775185 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.776391 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.776432 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.776452 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.776468 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.781030 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g" (OuterVolumeSpecName: "kube-api-access-8z25g") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "kube-api-access-8z25g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.781343 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.781579 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e61d3adb-1166-4fe7-b38d-efbbf01446ea" (UID: "e61d3adb-1166-4fe7-b38d-efbbf01446ea"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.877955 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.877987 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e61d3adb-1166-4fe7-b38d-efbbf01446ea-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:39 crc kubenswrapper[4718]: I0123 16:31:39.877997 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z25g\" (UniqueName: \"kubernetes.io/projected/e61d3adb-1166-4fe7-b38d-efbbf01446ea-kube-api-access-8z25g\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.369246 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c466d7564-7dkhr_e61d3adb-1166-4fe7-b38d-efbbf01446ea/console/0.log" Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.369797 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c466d7564-7dkhr" event={"ID":"e61d3adb-1166-4fe7-b38d-efbbf01446ea","Type":"ContainerDied","Data":"bacb5c2f17e6d2a6012b51d92968c4cf42b6294c16ddaafc7f0589d985d685a7"} Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.369859 4718 scope.go:117] "RemoveContainer" 
containerID="67b17f6cb013b001ee0896b5d0004543934916a49a6827d678cf260828a6b6f4" Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.370022 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c466d7564-7dkhr" Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.426525 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:31:40 crc kubenswrapper[4718]: I0123 16:31:40.435931 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5c466d7564-7dkhr"] Jan 23 16:31:41 crc kubenswrapper[4718]: I0123 16:31:41.159397 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" path="/var/lib/kubelet/pods/e61d3adb-1166-4fe7-b38d-efbbf01446ea/volumes" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.365494 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s"] Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366230 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366245 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366266 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="extract-content" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366275 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="extract-content" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366291 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366298 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366308 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="extract-utilities" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366313 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="extract-utilities" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366321 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" containerName="console" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366326 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" containerName="console" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366333 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="extract-utilities" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366338 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="extract-utilities" Jan 23 16:31:46 crc kubenswrapper[4718]: E0123 16:31:46.366358 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="extract-content" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366364 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="extract-content" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366484 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3ecc2d21-e511-4323-a21f-48a78800a566" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366494 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61d3adb-1166-4fe7-b38d-efbbf01446ea" containerName="console" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.366502 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f3fac4-ac14-4cf7-91d8-bea82ae5f3ac" containerName="registry-server" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.367559 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.374332 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.380387 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s"] Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.522526 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.522650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.522779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctvj\" (UniqueName: \"kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.624406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.624492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctvj\" (UniqueName: \"kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.624592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 
16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.625285 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.625335 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.647220 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctvj\" (UniqueName: \"kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:46 crc kubenswrapper[4718]: I0123 16:31:46.683417 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:47 crc kubenswrapper[4718]: I0123 16:31:47.152753 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s"] Jan 23 16:31:47 crc kubenswrapper[4718]: I0123 16:31:47.432976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" event={"ID":"dd644261-7d52-41e2-935a-56fde296d6b3","Type":"ContainerStarted","Data":"8719074d9b2d7df97bba8922999d4ce0542ebc72b51ae9e092d094ba3352c29f"} Jan 23 16:31:48 crc kubenswrapper[4718]: I0123 16:31:48.441543 4718 generic.go:334] "Generic (PLEG): container finished" podID="dd644261-7d52-41e2-935a-56fde296d6b3" containerID="0247df6b4d9976f87fbb867224d182c7e2cbda447301d7fdabd78c2a1c403561" exitCode=0 Jan 23 16:31:48 crc kubenswrapper[4718]: I0123 16:31:48.441590 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" event={"ID":"dd644261-7d52-41e2-935a-56fde296d6b3","Type":"ContainerDied","Data":"0247df6b4d9976f87fbb867224d182c7e2cbda447301d7fdabd78c2a1c403561"} Jan 23 16:31:50 crc kubenswrapper[4718]: I0123 16:31:50.463143 4718 generic.go:334] "Generic (PLEG): container finished" podID="dd644261-7d52-41e2-935a-56fde296d6b3" containerID="d9bd6c89d1d4d2e92b542c7afda02a236cd73e2709955bc1fad4262f9ef6e04e" exitCode=0 Jan 23 16:31:50 crc kubenswrapper[4718]: I0123 16:31:50.463360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" event={"ID":"dd644261-7d52-41e2-935a-56fde296d6b3","Type":"ContainerDied","Data":"d9bd6c89d1d4d2e92b542c7afda02a236cd73e2709955bc1fad4262f9ef6e04e"} Jan 23 16:31:51 crc kubenswrapper[4718]: I0123 16:31:51.472355 4718 
generic.go:334] "Generic (PLEG): container finished" podID="dd644261-7d52-41e2-935a-56fde296d6b3" containerID="67fb965d701f4c1bbfb3c52c27652a7cd5a11aafed58ab40b05f88af94c20626" exitCode=0 Jan 23 16:31:51 crc kubenswrapper[4718]: I0123 16:31:51.472679 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" event={"ID":"dd644261-7d52-41e2-935a-56fde296d6b3","Type":"ContainerDied","Data":"67fb965d701f4c1bbfb3c52c27652a7cd5a11aafed58ab40b05f88af94c20626"} Jan 23 16:31:52 crc kubenswrapper[4718]: I0123 16:31:52.934475 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.048333 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kctvj\" (UniqueName: \"kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj\") pod \"dd644261-7d52-41e2-935a-56fde296d6b3\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.048437 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util\") pod \"dd644261-7d52-41e2-935a-56fde296d6b3\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.048568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle\") pod \"dd644261-7d52-41e2-935a-56fde296d6b3\" (UID: \"dd644261-7d52-41e2-935a-56fde296d6b3\") " Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.049882 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle" (OuterVolumeSpecName: "bundle") pod "dd644261-7d52-41e2-935a-56fde296d6b3" (UID: "dd644261-7d52-41e2-935a-56fde296d6b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.054017 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj" (OuterVolumeSpecName: "kube-api-access-kctvj") pod "dd644261-7d52-41e2-935a-56fde296d6b3" (UID: "dd644261-7d52-41e2-935a-56fde296d6b3"). InnerVolumeSpecName "kube-api-access-kctvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.059113 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util" (OuterVolumeSpecName: "util") pod "dd644261-7d52-41e2-935a-56fde296d6b3" (UID: "dd644261-7d52-41e2-935a-56fde296d6b3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.151292 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.151384 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kctvj\" (UniqueName: \"kubernetes.io/projected/dd644261-7d52-41e2-935a-56fde296d6b3-kube-api-access-kctvj\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.151410 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd644261-7d52-41e2-935a-56fde296d6b3-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.496490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" event={"ID":"dd644261-7d52-41e2-935a-56fde296d6b3","Type":"ContainerDied","Data":"8719074d9b2d7df97bba8922999d4ce0542ebc72b51ae9e092d094ba3352c29f"} Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.496540 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8719074d9b2d7df97bba8922999d4ce0542ebc72b51ae9e092d094ba3352c29f" Jan 23 16:31:53 crc kubenswrapper[4718]: I0123 16:31:53.496579 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.717338 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn"] Jan 23 16:32:02 crc kubenswrapper[4718]: E0123 16:32:02.718056 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="pull" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.718068 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="pull" Jan 23 16:32:02 crc kubenswrapper[4718]: E0123 16:32:02.718088 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="util" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.718094 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="util" Jan 23 16:32:02 crc kubenswrapper[4718]: E0123 16:32:02.718104 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="extract" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.718110 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="extract" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.718243 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd644261-7d52-41e2-935a-56fde296d6b3" containerName="extract" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.718987 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.731690 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.731731 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.731752 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.732581 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.734881 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-c9bnw" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.737129 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn"] Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.768384 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjrr\" (UniqueName: \"kubernetes.io/projected/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-kube-api-access-pbjrr\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.768837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-webhook-cert\") pod 
\"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.769107 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-apiservice-cert\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.875482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-apiservice-cert\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.875585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbjrr\" (UniqueName: \"kubernetes.io/projected/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-kube-api-access-pbjrr\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.875650 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-webhook-cert\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc 
kubenswrapper[4718]: I0123 16:32:02.887792 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-apiservice-cert\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.892200 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-webhook-cert\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:02 crc kubenswrapper[4718]: I0123 16:32:02.892562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbjrr\" (UniqueName: \"kubernetes.io/projected/7eb6e283-9137-4b68-88b1-9a9dccb9fcd5-kube-api-access-pbjrr\") pod \"metallb-operator-controller-manager-85fcf7954b-5fmcn\" (UID: \"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5\") " pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.001517 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr"] Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.008970 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.011179 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.011656 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kg59j" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.014868 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.045429 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr"] Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.056833 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.081859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jzx\" (UniqueName: \"kubernetes.io/projected/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-kube-api-access-97jzx\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.081903 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-webhook-cert\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 
16:32:03.081930 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-apiservice-cert\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.183569 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97jzx\" (UniqueName: \"kubernetes.io/projected/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-kube-api-access-97jzx\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.183624 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-webhook-cert\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.183682 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-apiservice-cert\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.196328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-apiservice-cert\") pod 
\"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.197148 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-webhook-cert\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.211918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97jzx\" (UniqueName: \"kubernetes.io/projected/b6a8f377-b8e9-4241-a0fa-b40031d27cd7-kube-api-access-97jzx\") pod \"metallb-operator-webhook-server-6865b95b75-sk5rr\" (UID: \"b6a8f377-b8e9-4241-a0fa-b40031d27cd7\") " pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.330955 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.616726 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn"] Jan 23 16:32:03 crc kubenswrapper[4718]: I0123 16:32:03.945730 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr"] Jan 23 16:32:04 crc kubenswrapper[4718]: I0123 16:32:04.597483 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" event={"ID":"b6a8f377-b8e9-4241-a0fa-b40031d27cd7","Type":"ContainerStarted","Data":"c1932e7d2365fad62224f7d57f2b6fa8769376abc02bbac1cca6487519012922"} Jan 23 16:32:04 crc kubenswrapper[4718]: I0123 16:32:04.599016 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" event={"ID":"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5","Type":"ContainerStarted","Data":"d3feb8a9a9d07ada06484c3d3c5499150390fa26b617cd2e1bb2efccd62c3602"} Jan 23 16:32:09 crc kubenswrapper[4718]: I0123 16:32:09.647110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" event={"ID":"b6a8f377-b8e9-4241-a0fa-b40031d27cd7","Type":"ContainerStarted","Data":"74dc5e3454094366c55d1676d57345e689a6aa368b4d523ef43f310dafa5507e"} Jan 23 16:32:09 crc kubenswrapper[4718]: I0123 16:32:09.647759 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:09 crc kubenswrapper[4718]: I0123 16:32:09.648328 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" 
event={"ID":"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5","Type":"ContainerStarted","Data":"933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d"} Jan 23 16:32:09 crc kubenswrapper[4718]: I0123 16:32:09.648544 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:09 crc kubenswrapper[4718]: I0123 16:32:09.678613 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" podStartSLOduration=2.61201498 podStartE2EDuration="7.678588988s" podCreationTimestamp="2026-01-23 16:32:02 +0000 UTC" firstStartedPulling="2026-01-23 16:32:03.949009476 +0000 UTC m=+925.096251487" lastFinishedPulling="2026-01-23 16:32:09.015583504 +0000 UTC m=+930.162825495" observedRunningTime="2026-01-23 16:32:09.666925482 +0000 UTC m=+930.814167483" watchObservedRunningTime="2026-01-23 16:32:09.678588988 +0000 UTC m=+930.825831009" Jan 23 16:32:23 crc kubenswrapper[4718]: I0123 16:32:23.336730 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6865b95b75-sk5rr" Jan 23 16:32:23 crc kubenswrapper[4718]: I0123 16:32:23.362956 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" podStartSLOduration=15.997558488 podStartE2EDuration="21.362941101s" podCreationTimestamp="2026-01-23 16:32:02 +0000 UTC" firstStartedPulling="2026-01-23 16:32:03.628335238 +0000 UTC m=+924.775577229" lastFinishedPulling="2026-01-23 16:32:08.993717851 +0000 UTC m=+930.140959842" observedRunningTime="2026-01-23 16:32:09.699034563 +0000 UTC m=+930.846276554" watchObservedRunningTime="2026-01-23 16:32:23.362941101 +0000 UTC m=+944.510183092" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.062132 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.885435 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh"] Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.886888 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.890738 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zxtj4" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.891878 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dk28g"] Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.896568 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.896825 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.897844 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh"] Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.898868 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.900484 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.986443 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-metrics\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " 
pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.986519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-sockets\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.986683 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e444ff80-712b-463e-9fb2-646835a025f9-frr-startup\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.986866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e444ff80-712b-463e-9fb2-646835a025f9-metrics-certs\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.986935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txsz\" (UniqueName: \"kubernetes.io/projected/0a04951a-b116-4c6f-ad48-4742051ef181-kube-api-access-2txsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.987016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdps\" (UniqueName: \"kubernetes.io/projected/e444ff80-712b-463e-9fb2-646835a025f9-kube-api-access-tbdps\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " 
pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.987181 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-reloader\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.987310 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a04951a-b116-4c6f-ad48-4742051ef181-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.987346 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-conf\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:43 crc kubenswrapper[4718]: I0123 16:32:43.996989 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rfc5b"] Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.001336 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.020904 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.021060 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wf5nc" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.021311 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.021534 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.070656 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-pd8td"] Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.073825 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.077504 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.089292 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pd8td"] Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.089288 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e444ff80-712b-463e-9fb2-646835a025f9-frr-startup\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.089394 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.089436 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e444ff80-712b-463e-9fb2-646835a025f9-metrics-certs\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.089459 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txsz\" (UniqueName: \"kubernetes.io/projected/0a04951a-b116-4c6f-ad48-4742051ef181-kube-api-access-2txsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.090820 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbdps\" (UniqueName: \"kubernetes.io/projected/e444ff80-712b-463e-9fb2-646835a025f9-kube-api-access-tbdps\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.090861 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smwd\" (UniqueName: \"kubernetes.io/projected/b47f2ba5-694f-4929-9932-a844b35ba149-kube-api-access-8smwd\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.090198 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e444ff80-712b-463e-9fb2-646835a025f9-frr-startup\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.090884 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b47f2ba5-694f-4929-9932-a844b35ba149-metallb-excludel2\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091018 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-reloader\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0a04951a-b116-4c6f-ad48-4742051ef181-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-conf\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091120 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-metrics\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091163 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-metrics-certs\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-sockets\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-sockets\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " 
pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091828 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-frr-conf\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.091962 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-reloader\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.092102 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e444ff80-712b-463e-9fb2-646835a025f9-metrics\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.101661 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a04951a-b116-4c6f-ad48-4742051ef181-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.108457 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e444ff80-712b-463e-9fb2-646835a025f9-metrics-certs\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.113372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txsz\" (UniqueName: 
\"kubernetes.io/projected/0a04951a-b116-4c6f-ad48-4742051ef181-kube-api-access-2txsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-2kdmh\" (UID: \"0a04951a-b116-4c6f-ad48-4742051ef181\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.117011 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbdps\" (UniqueName: \"kubernetes.io/projected/e444ff80-712b-463e-9fb2-646835a025f9-kube-api-access-tbdps\") pod \"frr-k8s-dk28g\" (UID: \"e444ff80-712b-463e-9fb2-646835a025f9\") " pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.193738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-cert\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194418 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8smwd\" (UniqueName: \"kubernetes.io/projected/b47f2ba5-694f-4929-9932-a844b35ba149-kube-api-access-8smwd\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b47f2ba5-694f-4929-9932-a844b35ba149-metallb-excludel2\") 
pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-metrics-certs\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194598 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7b9\" (UniqueName: \"kubernetes.io/projected/9e7b3c6e-a339-4412-aecf-1091bfc315a5-kube-api-access-pb7b9\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.194659 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-metrics-certs\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: E0123 16:32:44.194989 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 16:32:44 crc kubenswrapper[4718]: E0123 16:32:44.195095 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist podName:b47f2ba5-694f-4929-9932-a844b35ba149 nodeName:}" failed. No retries permitted until 2026-01-23 16:32:44.695071926 +0000 UTC m=+965.842313917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist") pod "speaker-rfc5b" (UID: "b47f2ba5-694f-4929-9932-a844b35ba149") : secret "metallb-memberlist" not found Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.195568 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b47f2ba5-694f-4929-9932-a844b35ba149-metallb-excludel2\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.202125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-metrics-certs\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.206259 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.216847 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.223126 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8smwd\" (UniqueName: \"kubernetes.io/projected/b47f2ba5-694f-4929-9932-a844b35ba149-kube-api-access-8smwd\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.295829 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb7b9\" (UniqueName: \"kubernetes.io/projected/9e7b3c6e-a339-4412-aecf-1091bfc315a5-kube-api-access-pb7b9\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.295909 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-metrics-certs\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.295943 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-cert\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.298277 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.299571 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-metrics-certs\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.311591 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e7b3c6e-a339-4412-aecf-1091bfc315a5-cert\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.318923 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb7b9\" (UniqueName: \"kubernetes.io/projected/9e7b3c6e-a339-4412-aecf-1091bfc315a5-kube-api-access-pb7b9\") pod \"controller-6968d8fdc4-pd8td\" (UID: \"9e7b3c6e-a339-4412-aecf-1091bfc315a5\") " pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.415427 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.685792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh"] Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.706407 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:44 crc kubenswrapper[4718]: E0123 16:32:44.707149 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 16:32:44 crc kubenswrapper[4718]: E0123 16:32:44.707428 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist podName:b47f2ba5-694f-4929-9932-a844b35ba149 nodeName:}" failed. No retries permitted until 2026-01-23 16:32:45.70730448 +0000 UTC m=+966.854546501 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist") pod "speaker-rfc5b" (UID: "b47f2ba5-694f-4929-9932-a844b35ba149") : secret "metallb-memberlist" not found Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.864462 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pd8td"] Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.946305 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pd8td" event={"ID":"9e7b3c6e-a339-4412-aecf-1091bfc315a5","Type":"ContainerStarted","Data":"7c948ad7ae964aad71953cb31f1c1da902cf86ae11dee29c3cbf75d4709fedbf"} Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.947795 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" event={"ID":"0a04951a-b116-4c6f-ad48-4742051ef181","Type":"ContainerStarted","Data":"7d0f44584e1875f71573366330cb86992392210c9dffe0ce5ea4ae3821fe6054"} Jan 23 16:32:44 crc kubenswrapper[4718]: I0123 16:32:44.949354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"a5ae1f4aecdedd8861430d4d7c6472d3d4d17b26d9fd243488ba03df5f133812"} Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.730695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist\") pod \"speaker-rfc5b\" (UID: \"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.740756 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b47f2ba5-694f-4929-9932-a844b35ba149-memberlist\") pod \"speaker-rfc5b\" (UID: 
\"b47f2ba5-694f-4929-9932-a844b35ba149\") " pod="metallb-system/speaker-rfc5b" Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.868065 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rfc5b" Jan 23 16:32:45 crc kubenswrapper[4718]: W0123 16:32:45.904521 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47f2ba5_694f_4929_9932_a844b35ba149.slice/crio-3e73873b4a1cbbc5a20e074670cd11181932b27b3c9e0e4042a271feadb59458 WatchSource:0}: Error finding container 3e73873b4a1cbbc5a20e074670cd11181932b27b3c9e0e4042a271feadb59458: Status 404 returned error can't find the container with id 3e73873b4a1cbbc5a20e074670cd11181932b27b3c9e0e4042a271feadb59458 Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.961331 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pd8td" event={"ID":"9e7b3c6e-a339-4412-aecf-1091bfc315a5","Type":"ContainerStarted","Data":"872ed57281067816033fc3b04ecbff7db4bf73a47416019e6b75e031b0c951eb"} Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.961426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pd8td" event={"ID":"9e7b3c6e-a339-4412-aecf-1091bfc315a5","Type":"ContainerStarted","Data":"0d9c0de93ac8de5c23fece796dcdcb55f43b62c68f380c78dfedb05a123903c9"} Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.961504 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.965957 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rfc5b" event={"ID":"b47f2ba5-694f-4929-9932-a844b35ba149","Type":"ContainerStarted","Data":"3e73873b4a1cbbc5a20e074670cd11181932b27b3c9e0e4042a271feadb59458"} Jan 23 16:32:45 crc kubenswrapper[4718]: I0123 16:32:45.997895 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-pd8td" podStartSLOduration=2.997857008 podStartE2EDuration="2.997857008s" podCreationTimestamp="2026-01-23 16:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:32:45.986840339 +0000 UTC m=+967.134082340" watchObservedRunningTime="2026-01-23 16:32:45.997857008 +0000 UTC m=+967.145098989" Jan 23 16:32:46 crc kubenswrapper[4718]: I0123 16:32:46.975318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rfc5b" event={"ID":"b47f2ba5-694f-4929-9932-a844b35ba149","Type":"ContainerStarted","Data":"fffcc3afe220a82f5d4df3787eeaac28e8280d4435a0d700367bda79980b3206"} Jan 23 16:32:46 crc kubenswrapper[4718]: I0123 16:32:46.975708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rfc5b" event={"ID":"b47f2ba5-694f-4929-9932-a844b35ba149","Type":"ContainerStarted","Data":"c3de13fed96a891a21cd38ac915c8c3f9e8d6a831e785713b7438b1509b54e87"} Jan 23 16:32:46 crc kubenswrapper[4718]: I0123 16:32:46.999693 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rfc5b" podStartSLOduration=3.999675834 podStartE2EDuration="3.999675834s" podCreationTimestamp="2026-01-23 16:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:32:46.99587262 +0000 UTC m=+968.143114621" watchObservedRunningTime="2026-01-23 16:32:46.999675834 +0000 UTC m=+968.146917825" Jan 23 16:32:47 crc kubenswrapper[4718]: I0123 16:32:47.982148 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rfc5b" Jan 23 16:32:53 crc kubenswrapper[4718]: I0123 16:32:53.032575 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="e444ff80-712b-463e-9fb2-646835a025f9" containerID="adb7840aa3265f9d99415a6bf9c14c528c359baebb6afd940fe393a1f0adca01" exitCode=0 Jan 23 16:32:53 crc kubenswrapper[4718]: I0123 16:32:53.032732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerDied","Data":"adb7840aa3265f9d99415a6bf9c14c528c359baebb6afd940fe393a1f0adca01"} Jan 23 16:32:53 crc kubenswrapper[4718]: I0123 16:32:53.035848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" event={"ID":"0a04951a-b116-4c6f-ad48-4742051ef181","Type":"ContainerStarted","Data":"3222e364e39ef8f4810968a68ae50524cb6829971a89688bbbcbbcd5e3927abc"} Jan 23 16:32:53 crc kubenswrapper[4718]: I0123 16:32:53.036040 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:32:54 crc kubenswrapper[4718]: I0123 16:32:54.046584 4718 generic.go:334] "Generic (PLEG): container finished" podID="e444ff80-712b-463e-9fb2-646835a025f9" containerID="0b8aa129fb8c225fb4d7512e92b4885964b14bfd16e44efee5cd92121f270faa" exitCode=0 Jan 23 16:32:54 crc kubenswrapper[4718]: I0123 16:32:54.046714 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerDied","Data":"0b8aa129fb8c225fb4d7512e92b4885964b14bfd16e44efee5cd92121f270faa"} Jan 23 16:32:54 crc kubenswrapper[4718]: I0123 16:32:54.072856 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" podStartSLOduration=3.692620784 podStartE2EDuration="11.072837161s" podCreationTimestamp="2026-01-23 16:32:43 +0000 UTC" firstStartedPulling="2026-01-23 16:32:44.706387236 +0000 UTC m=+965.853629237" lastFinishedPulling="2026-01-23 16:32:52.086603623 +0000 UTC m=+973.233845614" 
observedRunningTime="2026-01-23 16:32:53.094136932 +0000 UTC m=+974.241378933" watchObservedRunningTime="2026-01-23 16:32:54.072837161 +0000 UTC m=+975.220079152" Jan 23 16:32:55 crc kubenswrapper[4718]: I0123 16:32:55.061873 4718 generic.go:334] "Generic (PLEG): container finished" podID="e444ff80-712b-463e-9fb2-646835a025f9" containerID="05565a0a636653dff7b7ee84dc3b0d2ba7b6a867e08f56128d8fb23c29e9c312" exitCode=0 Jan 23 16:32:55 crc kubenswrapper[4718]: I0123 16:32:55.061968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerDied","Data":"05565a0a636653dff7b7ee84dc3b0d2ba7b6a867e08f56128d8fb23c29e9c312"} Jan 23 16:32:56 crc kubenswrapper[4718]: I0123 16:32:56.075617 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"1da803353835b920e9d685b708c7e467be9a0af229e090582fb652c0978796be"} Jan 23 16:32:56 crc kubenswrapper[4718]: I0123 16:32:56.076722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"bada98884cb225c1d461f4de1213cb6082b05307be02bd3d860507d3a40b05ea"} Jan 23 16:32:56 crc kubenswrapper[4718]: I0123 16:32:56.076738 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"441dcb1a38bb0f7fce31a9b84d8d106cb04a9f24a46038a812afb73b8cf3ba76"} Jan 23 16:32:56 crc kubenswrapper[4718]: I0123 16:32:56.076748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"89fb8ddaf16686fb011b87fa0a34b0312ff43f81e79b9bff0486c6d418724c5d"} Jan 23 16:32:57 crc kubenswrapper[4718]: I0123 
16:32:57.089952 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:57 crc kubenswrapper[4718]: I0123 16:32:57.090713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"3a4ed235532d60f8b0054acd740f65fdb899ff3d8a402a9008864283fd8f07c2"} Jan 23 16:32:57 crc kubenswrapper[4718]: I0123 16:32:57.090776 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dk28g" event={"ID":"e444ff80-712b-463e-9fb2-646835a025f9","Type":"ContainerStarted","Data":"8b46ee94b96bfb0d39910ac4615a0f5205d3e4c081d39bc037c0f6ad12a6f051"} Jan 23 16:32:57 crc kubenswrapper[4718]: I0123 16:32:57.112252 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dk28g" podStartSLOduration=6.470711162 podStartE2EDuration="14.112221006s" podCreationTimestamp="2026-01-23 16:32:43 +0000 UTC" firstStartedPulling="2026-01-23 16:32:44.39908519 +0000 UTC m=+965.546327181" lastFinishedPulling="2026-01-23 16:32:52.040594994 +0000 UTC m=+973.187837025" observedRunningTime="2026-01-23 16:32:57.108267578 +0000 UTC m=+978.255509579" watchObservedRunningTime="2026-01-23 16:32:57.112221006 +0000 UTC m=+978.259463037" Jan 23 16:32:59 crc kubenswrapper[4718]: I0123 16:32:59.217526 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:32:59 crc kubenswrapper[4718]: I0123 16:32:59.298151 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:33:04 crc kubenswrapper[4718]: I0123 16:33:04.215969 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" Jan 23 16:33:04 crc kubenswrapper[4718]: I0123 16:33:04.423895 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-pd8td" Jan 23 16:33:05 crc kubenswrapper[4718]: I0123 16:33:05.875170 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rfc5b" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.181763 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.189410 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.203307 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.203427 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-t6ld5" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.203315 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.225866 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.306203 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mv6\" (UniqueName: \"kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6\") pod \"openstack-operator-index-qklmc\" (UID: \"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc\") " pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.407674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mv6\" (UniqueName: 
\"kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6\") pod \"openstack-operator-index-qklmc\" (UID: \"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc\") " pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.441019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mv6\" (UniqueName: \"kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6\") pod \"openstack-operator-index-qklmc\" (UID: \"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc\") " pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.537419 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:09 crc kubenswrapper[4718]: I0123 16:33:09.997739 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:10 crc kubenswrapper[4718]: I0123 16:33:10.233491 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qklmc" event={"ID":"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc","Type":"ContainerStarted","Data":"46e37984ff96ea3441cdb48030df22008bb311d4152f2d3d9beaa8954345c0f4"} Jan 23 16:33:11 crc kubenswrapper[4718]: I0123 16:33:11.932904 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.539697 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zvx6t"] Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.541732 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.548605 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zvx6t"] Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.710288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq6q\" (UniqueName: \"kubernetes.io/projected/f519ad69-0e68-44c6-9805-40fb66819268-kube-api-access-7fq6q\") pod \"openstack-operator-index-zvx6t\" (UID: \"f519ad69-0e68-44c6-9805-40fb66819268\") " pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.812446 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq6q\" (UniqueName: \"kubernetes.io/projected/f519ad69-0e68-44c6-9805-40fb66819268-kube-api-access-7fq6q\") pod \"openstack-operator-index-zvx6t\" (UID: \"f519ad69-0e68-44c6-9805-40fb66819268\") " pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.831895 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq6q\" (UniqueName: \"kubernetes.io/projected/f519ad69-0e68-44c6-9805-40fb66819268-kube-api-access-7fq6q\") pod \"openstack-operator-index-zvx6t\" (UID: \"f519ad69-0e68-44c6-9805-40fb66819268\") " pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:12 crc kubenswrapper[4718]: I0123 16:33:12.900916 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.274839 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qklmc" event={"ID":"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc","Type":"ContainerStarted","Data":"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38"} Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.275036 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qklmc" podUID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" containerName="registry-server" containerID="cri-o://7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38" gracePeriod=2 Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.306564 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qklmc" podStartSLOduration=2.214810097 podStartE2EDuration="4.306535445s" podCreationTimestamp="2026-01-23 16:33:09 +0000 UTC" firstStartedPulling="2026-01-23 16:33:10.01085856 +0000 UTC m=+991.158100591" lastFinishedPulling="2026-01-23 16:33:12.102583928 +0000 UTC m=+993.249825939" observedRunningTime="2026-01-23 16:33:13.289761529 +0000 UTC m=+994.437003560" watchObservedRunningTime="2026-01-23 16:33:13.306535445 +0000 UTC m=+994.453777456" Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.352429 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zvx6t"] Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.815921 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.936000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mv6\" (UniqueName: \"kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6\") pod \"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc\" (UID: \"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc\") " Jan 23 16:33:13 crc kubenswrapper[4718]: I0123 16:33:13.943640 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6" (OuterVolumeSpecName: "kube-api-access-f9mv6") pod "e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" (UID: "e4b04640-c903-4f8d-9d51-5b7cdfbde4dc"). InnerVolumeSpecName "kube-api-access-f9mv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.039411 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mv6\" (UniqueName: \"kubernetes.io/projected/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc-kube-api-access-f9mv6\") on node \"crc\" DevicePath \"\"" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.221911 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dk28g" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.286439 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zvx6t" event={"ID":"f519ad69-0e68-44c6-9805-40fb66819268","Type":"ContainerStarted","Data":"ff01ffc928224e242073c80831d9508fa9f33122ca27455f17bda913e423701f"} Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.286506 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zvx6t" 
event={"ID":"f519ad69-0e68-44c6-9805-40fb66819268","Type":"ContainerStarted","Data":"f4a54e3f8f9514f8aab19e6aea06ca7de383c467558cbfd7fabf9bb1351dd8b8"} Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.288425 4718 generic.go:334] "Generic (PLEG): container finished" podID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" containerID="7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38" exitCode=0 Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.288471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qklmc" event={"ID":"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc","Type":"ContainerDied","Data":"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38"} Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.288498 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qklmc" event={"ID":"e4b04640-c903-4f8d-9d51-5b7cdfbde4dc","Type":"ContainerDied","Data":"46e37984ff96ea3441cdb48030df22008bb311d4152f2d3d9beaa8954345c0f4"} Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.288516 4718 scope.go:117] "RemoveContainer" containerID="7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.288543 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qklmc" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.320778 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zvx6t" podStartSLOduration=2.276379023 podStartE2EDuration="2.320751989s" podCreationTimestamp="2026-01-23 16:33:12 +0000 UTC" firstStartedPulling="2026-01-23 16:33:13.404440315 +0000 UTC m=+994.551682306" lastFinishedPulling="2026-01-23 16:33:13.448813281 +0000 UTC m=+994.596055272" observedRunningTime="2026-01-23 16:33:14.311246371 +0000 UTC m=+995.458488372" watchObservedRunningTime="2026-01-23 16:33:14.320751989 +0000 UTC m=+995.467993990" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.333505 4718 scope.go:117] "RemoveContainer" containerID="7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38" Jan 23 16:33:14 crc kubenswrapper[4718]: E0123 16:33:14.334258 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38\": container with ID starting with 7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38 not found: ID does not exist" containerID="7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.334306 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38"} err="failed to get container status \"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38\": rpc error: code = NotFound desc = could not find container \"7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38\": container with ID starting with 7b3e5e7b9570de47d5ec01b8b6fecaeba8b2db3c3e1cbba27101fddd6862de38 not found: ID does not exist" Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 
16:33:14.335782 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:14 crc kubenswrapper[4718]: I0123 16:33:14.341790 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qklmc"] Jan 23 16:33:15 crc kubenswrapper[4718]: I0123 16:33:15.156804 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" path="/var/lib/kubelet/pods/e4b04640-c903-4f8d-9d51-5b7cdfbde4dc/volumes" Jan 23 16:33:22 crc kubenswrapper[4718]: I0123 16:33:22.901067 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:22 crc kubenswrapper[4718]: I0123 16:33:22.902188 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:22 crc kubenswrapper[4718]: I0123 16:33:22.959754 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:23 crc kubenswrapper[4718]: I0123 16:33:23.417318 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zvx6t" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.598402 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s"] Jan 23 16:33:24 crc kubenswrapper[4718]: E0123 16:33:24.599255 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" containerName="registry-server" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.599271 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" containerName="registry-server" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.599439 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b04640-c903-4f8d-9d51-5b7cdfbde4dc" containerName="registry-server" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.600601 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.607329 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-74fdh" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.608536 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s"] Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.681154 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.681263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmvx\" (UniqueName: \"kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.681304 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util\") pod 
\"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.783362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmvx\" (UniqueName: \"kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.783514 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.783613 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.784665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " 
pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.784751 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.818172 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmvx\" (UniqueName: \"kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx\") pod \"ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:24 crc kubenswrapper[4718]: I0123 16:33:24.920434 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:25 crc kubenswrapper[4718]: I0123 16:33:25.469413 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s"] Jan 23 16:33:25 crc kubenswrapper[4718]: W0123 16:33:25.474958 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc465f3f4_76f9_4a34_be6d_1bef61e77c8f.slice/crio-42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462 WatchSource:0}: Error finding container 42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462: Status 404 returned error can't find the container with id 42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462 Jan 23 16:33:26 crc kubenswrapper[4718]: I0123 16:33:26.395206 4718 generic.go:334] "Generic (PLEG): container finished" podID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerID="3c902c5b5a039be5629f07f5ae8d69883951fce1278c06e3b0ca1ec9361d9f75" exitCode=0 Jan 23 16:33:26 crc kubenswrapper[4718]: I0123 16:33:26.395255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" event={"ID":"c465f3f4-76f9-4a34-be6d-1bef61e77c8f","Type":"ContainerDied","Data":"3c902c5b5a039be5629f07f5ae8d69883951fce1278c06e3b0ca1ec9361d9f75"} Jan 23 16:33:26 crc kubenswrapper[4718]: I0123 16:33:26.395477 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" event={"ID":"c465f3f4-76f9-4a34-be6d-1bef61e77c8f","Type":"ContainerStarted","Data":"42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462"} Jan 23 16:33:27 crc kubenswrapper[4718]: I0123 16:33:27.406212 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerID="f9009ceb14123d3491b866d3eee50323fc651b10dfe999b5b908ed2124740f30" exitCode=0 Jan 23 16:33:27 crc kubenswrapper[4718]: I0123 16:33:27.406387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" event={"ID":"c465f3f4-76f9-4a34-be6d-1bef61e77c8f","Type":"ContainerDied","Data":"f9009ceb14123d3491b866d3eee50323fc651b10dfe999b5b908ed2124740f30"} Jan 23 16:33:28 crc kubenswrapper[4718]: I0123 16:33:28.418117 4718 generic.go:334] "Generic (PLEG): container finished" podID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerID="a68306927c403034a564e3ab54be0cc1a86cd63521db290de9b90798e6d7b730" exitCode=0 Jan 23 16:33:28 crc kubenswrapper[4718]: I0123 16:33:28.418188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" event={"ID":"c465f3f4-76f9-4a34-be6d-1bef61e77c8f","Type":"ContainerDied","Data":"a68306927c403034a564e3ab54be0cc1a86cd63521db290de9b90798e6d7b730"} Jan 23 16:33:28 crc kubenswrapper[4718]: I0123 16:33:28.876240 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:33:28 crc kubenswrapper[4718]: I0123 16:33:28.876309 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.786410 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.892503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util\") pod \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.892677 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djmvx\" (UniqueName: \"kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx\") pod \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.892790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle\") pod \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\" (UID: \"c465f3f4-76f9-4a34-be6d-1bef61e77c8f\") " Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.904922 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx" (OuterVolumeSpecName: "kube-api-access-djmvx") pod "c465f3f4-76f9-4a34-be6d-1bef61e77c8f" (UID: "c465f3f4-76f9-4a34-be6d-1bef61e77c8f"). InnerVolumeSpecName "kube-api-access-djmvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.905618 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle" (OuterVolumeSpecName: "bundle") pod "c465f3f4-76f9-4a34-be6d-1bef61e77c8f" (UID: "c465f3f4-76f9-4a34-be6d-1bef61e77c8f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.928038 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util" (OuterVolumeSpecName: "util") pod "c465f3f4-76f9-4a34-be6d-1bef61e77c8f" (UID: "c465f3f4-76f9-4a34-be6d-1bef61e77c8f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.995008 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmvx\" (UniqueName: \"kubernetes.io/projected/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-kube-api-access-djmvx\") on node \"crc\" DevicePath \"\"" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.995054 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:33:29 crc kubenswrapper[4718]: I0123 16:33:29.995063 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c465f3f4-76f9-4a34-be6d-1bef61e77c8f-util\") on node \"crc\" DevicePath \"\"" Jan 23 16:33:30 crc kubenswrapper[4718]: I0123 16:33:30.443777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" event={"ID":"c465f3f4-76f9-4a34-be6d-1bef61e77c8f","Type":"ContainerDied","Data":"42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462"} Jan 23 16:33:30 crc kubenswrapper[4718]: I0123 16:33:30.443830 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42dc3c32e7b200adec1c3971d4a03845f6fe86ee74f3aef24d9eb5905eb24462" Jan 23 16:33:30 crc kubenswrapper[4718]: I0123 16:33:30.443848 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.740962 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7549f75f-929gl"] Jan 23 16:33:36 crc kubenswrapper[4718]: E0123 16:33:36.741978 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="pull" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.741994 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="pull" Jan 23 16:33:36 crc kubenswrapper[4718]: E0123 16:33:36.742015 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="util" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.742023 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="util" Jan 23 16:33:36 crc kubenswrapper[4718]: E0123 16:33:36.742041 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="extract" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.742050 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="extract" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.742235 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c465f3f4-76f9-4a34-be6d-1bef61e77c8f" containerName="extract" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.743007 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.745779 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-m8pr9" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.779752 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7549f75f-929gl"] Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.808431 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5wc\" (UniqueName: \"kubernetes.io/projected/0c42f381-34a5-4913-90b0-0bbc4e0810fd-kube-api-access-xw5wc\") pod \"openstack-operator-controller-init-7549f75f-929gl\" (UID: \"0c42f381-34a5-4913-90b0-0bbc4e0810fd\") " pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.921168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw5wc\" (UniqueName: \"kubernetes.io/projected/0c42f381-34a5-4913-90b0-0bbc4e0810fd-kube-api-access-xw5wc\") pod \"openstack-operator-controller-init-7549f75f-929gl\" (UID: \"0c42f381-34a5-4913-90b0-0bbc4e0810fd\") " pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:36 crc kubenswrapper[4718]: I0123 16:33:36.962916 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw5wc\" (UniqueName: \"kubernetes.io/projected/0c42f381-34a5-4913-90b0-0bbc4e0810fd-kube-api-access-xw5wc\") pod \"openstack-operator-controller-init-7549f75f-929gl\" (UID: \"0c42f381-34a5-4913-90b0-0bbc4e0810fd\") " pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:37 crc kubenswrapper[4718]: I0123 16:33:37.061897 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:37 crc kubenswrapper[4718]: I0123 16:33:37.533078 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7549f75f-929gl"] Jan 23 16:33:37 crc kubenswrapper[4718]: W0123 16:33:37.546101 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c42f381_34a5_4913_90b0_0bbc4e0810fd.slice/crio-3e888046b917de78e5dad84eacaa5973c0ae0799ec203c1818eb5b3f45fcca49 WatchSource:0}: Error finding container 3e888046b917de78e5dad84eacaa5973c0ae0799ec203c1818eb5b3f45fcca49: Status 404 returned error can't find the container with id 3e888046b917de78e5dad84eacaa5973c0ae0799ec203c1818eb5b3f45fcca49 Jan 23 16:33:38 crc kubenswrapper[4718]: I0123 16:33:38.537624 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" event={"ID":"0c42f381-34a5-4913-90b0-0bbc4e0810fd","Type":"ContainerStarted","Data":"3e888046b917de78e5dad84eacaa5973c0ae0799ec203c1818eb5b3f45fcca49"} Jan 23 16:33:43 crc kubenswrapper[4718]: I0123 16:33:43.587129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" event={"ID":"0c42f381-34a5-4913-90b0-0bbc4e0810fd","Type":"ContainerStarted","Data":"1d5187e91a37d86972e34bebc9f92b8930cf6a5b395be479156ed1cfaf32977d"} Jan 23 16:33:43 crc kubenswrapper[4718]: I0123 16:33:43.588065 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:43 crc kubenswrapper[4718]: I0123 16:33:43.643015 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" podStartSLOduration=2.59091983 
podStartE2EDuration="7.642983062s" podCreationTimestamp="2026-01-23 16:33:36 +0000 UTC" firstStartedPulling="2026-01-23 16:33:37.549195149 +0000 UTC m=+1018.696437140" lastFinishedPulling="2026-01-23 16:33:42.601258381 +0000 UTC m=+1023.748500372" observedRunningTime="2026-01-23 16:33:43.629127715 +0000 UTC m=+1024.776369746" watchObservedRunningTime="2026-01-23 16:33:43.642983062 +0000 UTC m=+1024.790225093" Jan 23 16:33:47 crc kubenswrapper[4718]: I0123 16:33:47.068086 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 16:33:58 crc kubenswrapper[4718]: I0123 16:33:58.876098 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:33:58 crc kubenswrapper[4718]: I0123 16:33:58.876860 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.095662 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.097114 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.100763 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ld5kt" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.107839 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.116747 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.117834 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.120830 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-t2tz8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.158792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.158851 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.164543 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.173405 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kv7w5" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.197699 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4rq\" (UniqueName: \"kubernetes.io/projected/858bcd70-b537-4da9-8ca9-27c1724ece99-kube-api-access-nl4rq\") pod \"barbican-operator-controller-manager-7f86f8796f-62wgc\" (UID: \"858bcd70-b537-4da9-8ca9-27c1724ece99\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.201591 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.209038 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.218327 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.220577 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2lgxh" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.238584 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.247418 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.250014 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pnx7f" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.280617 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.299638 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4rq\" (UniqueName: \"kubernetes.io/projected/858bcd70-b537-4da9-8ca9-27c1724ece99-kube-api-access-nl4rq\") pod \"barbican-operator-controller-manager-7f86f8796f-62wgc\" (UID: \"858bcd70-b537-4da9-8ca9-27c1724ece99\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.299777 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpch\" (UniqueName: \"kubernetes.io/projected/a6012879-2e20-485d-829f-3a9ec3e5bcb1-kube-api-access-9kpch\") pod \"cinder-operator-controller-manager-69cf5d4557-9r22c\" (UID: \"a6012879-2e20-485d-829f-3a9ec3e5bcb1\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.299847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs5j\" (UniqueName: \"kubernetes.io/projected/8d9099e2-7f4f-42d8-8e76-d2d8347a1514-kube-api-access-gjs5j\") pod \"designate-operator-controller-manager-b45d7bf98-zs7zk\" (UID: \"8d9099e2-7f4f-42d8-8e76-d2d8347a1514\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.316162 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.353837 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.355058 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4rq\" (UniqueName: \"kubernetes.io/projected/858bcd70-b537-4da9-8ca9-27c1724ece99-kube-api-access-nl4rq\") pod \"barbican-operator-controller-manager-7f86f8796f-62wgc\" (UID: \"858bcd70-b537-4da9-8ca9-27c1724ece99\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.364813 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.365622 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.367184 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qfl6x" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.383702 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.384891 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.389038 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q8964" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.395525 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.401060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjs5j\" (UniqueName: \"kubernetes.io/projected/8d9099e2-7f4f-42d8-8e76-d2d8347a1514-kube-api-access-gjs5j\") pod \"designate-operator-controller-manager-b45d7bf98-zs7zk\" (UID: \"8d9099e2-7f4f-42d8-8e76-d2d8347a1514\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.401267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5pv\" (UniqueName: \"kubernetes.io/projected/d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2-kube-api-access-nq5pv\") pod \"heat-operator-controller-manager-594c8c9d5d-dfwk2\" (UID: \"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.401354 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5bs\" (UniqueName: \"kubernetes.io/projected/9e8950bc-8213-40eb-9bb7-2e1a8c66b57b-kube-api-access-vr5bs\") pod \"glance-operator-controller-manager-78fdd796fd-jjplg\" (UID: \"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.401401 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpch\" (UniqueName: \"kubernetes.io/projected/a6012879-2e20-485d-829f-3a9ec3e5bcb1-kube-api-access-9kpch\") pod \"cinder-operator-controller-manager-69cf5d4557-9r22c\" (UID: \"a6012879-2e20-485d-829f-3a9ec3e5bcb1\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.401514 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.402909 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.407610 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.407859 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hlgdw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.413322 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.420027 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.420597 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.421931 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.429155 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-h87bz" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.439962 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjs5j\" (UniqueName: \"kubernetes.io/projected/8d9099e2-7f4f-42d8-8e76-d2d8347a1514-kube-api-access-gjs5j\") pod \"designate-operator-controller-manager-b45d7bf98-zs7zk\" (UID: \"8d9099e2-7f4f-42d8-8e76-d2d8347a1514\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.440096 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.445410 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.445831 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpch\" (UniqueName: \"kubernetes.io/projected/a6012879-2e20-485d-829f-3a9ec3e5bcb1-kube-api-access-9kpch\") pod \"cinder-operator-controller-manager-69cf5d4557-9r22c\" (UID: \"a6012879-2e20-485d-829f-3a9ec3e5bcb1\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.446209 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.446815 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.460170 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wmzzq" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.490706 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.491900 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.498027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-54fp2" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.499036 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505398 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxb7b\" (UniqueName: \"kubernetes.io/projected/06df7a47-9233-4957-936e-27f58aeb0000-kube-api-access-fxb7b\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505464 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr5q\" (UniqueName: \"kubernetes.io/projected/d869ec7c-ddd9-4e17-9154-a793539a2a00-kube-api-access-7lr5q\") pod \"horizon-operator-controller-manager-77d5c5b54f-sr2hw\" (UID: \"d869ec7c-ddd9-4e17-9154-a793539a2a00\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzf4\" (UniqueName: \"kubernetes.io/projected/16e17ade-97be-48d4-83d4-7ac385174edd-kube-api-access-jrzf4\") pod \"ironic-operator-controller-manager-598f7747c9-t8fsk\" (UID: \"16e17ade-97be-48d4-83d4-7ac385174edd\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505536 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5pv\" (UniqueName: \"kubernetes.io/projected/d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2-kube-api-access-nq5pv\") pod \"heat-operator-controller-manager-594c8c9d5d-dfwk2\" (UID: \"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 
23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5bs\" (UniqueName: \"kubernetes.io/projected/9e8950bc-8213-40eb-9bb7-2e1a8c66b57b-kube-api-access-vr5bs\") pod \"glance-operator-controller-manager-78fdd796fd-jjplg\" (UID: \"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505595 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcb9\" (UniqueName: \"kubernetes.io/projected/50178034-67cf-4f8d-89bb-788c8a73a72a-kube-api-access-8lcb9\") pod \"keystone-operator-controller-manager-b8b6d4659-nwpcs\" (UID: \"50178034-67cf-4f8d-89bb-788c8a73a72a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.505642 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.508342 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.528497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5pv\" (UniqueName: \"kubernetes.io/projected/d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2-kube-api-access-nq5pv\") pod \"heat-operator-controller-manager-594c8c9d5d-dfwk2\" (UID: \"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2\") " 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.531196 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5bs\" (UniqueName: \"kubernetes.io/projected/9e8950bc-8213-40eb-9bb7-2e1a8c66b57b-kube-api-access-vr5bs\") pod \"glance-operator-controller-manager-78fdd796fd-jjplg\" (UID: \"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.531248 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.532352 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.532475 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.545389 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.546139 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.547054 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kr2q8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.557412 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.565767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.579735 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-t7t47" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.599082 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.607973 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.608753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxb7b\" (UniqueName: \"kubernetes.io/projected/06df7a47-9233-4957-936e-27f58aeb0000-kube-api-access-fxb7b\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.608868 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr5q\" (UniqueName: \"kubernetes.io/projected/d869ec7c-ddd9-4e17-9154-a793539a2a00-kube-api-access-7lr5q\") pod \"horizon-operator-controller-manager-77d5c5b54f-sr2hw\" (UID: \"d869ec7c-ddd9-4e17-9154-a793539a2a00\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.608934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrzf4\" (UniqueName: \"kubernetes.io/projected/16e17ade-97be-48d4-83d4-7ac385174edd-kube-api-access-jrzf4\") pod \"ironic-operator-controller-manager-598f7747c9-t8fsk\" (UID: \"16e17ade-97be-48d4-83d4-7ac385174edd\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.608994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqgj\" (UniqueName: \"kubernetes.io/projected/32d58a3a-df31-492e-a2c2-2f5ca31c5f90-kube-api-access-jpqgj\") pod \"manila-operator-controller-manager-78c6999f6f-jbxnk\" (UID: \"32d58a3a-df31-492e-a2c2-2f5ca31c5f90\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:07 crc kubenswrapper[4718]: 
I0123 16:34:07.609094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j64xc\" (UniqueName: \"kubernetes.io/projected/8e29e3d6-21d7-4a1a-832e-f831d884fd00-kube-api-access-j64xc\") pod \"neutron-operator-controller-manager-78d58447c5-sr4hx\" (UID: \"8e29e3d6-21d7-4a1a-832e-f831d884fd00\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.609139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lcb9\" (UniqueName: \"kubernetes.io/projected/50178034-67cf-4f8d-89bb-788c8a73a72a-kube-api-access-8lcb9\") pod \"keystone-operator-controller-manager-b8b6d4659-nwpcs\" (UID: \"50178034-67cf-4f8d-89bb-788c8a73a72a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.609259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.609308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlng\" (UniqueName: \"kubernetes.io/projected/9a95eff5-116c-4141-bee6-5bda12f21e11-kube-api-access-hxlng\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l\" (UID: \"9a95eff5-116c-4141-bee6-5bda12f21e11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:07 crc kubenswrapper[4718]: E0123 16:34:07.609516 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 23 16:34:07 crc kubenswrapper[4718]: E0123 16:34:07.609577 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert podName:06df7a47-9233-4957-936e-27f58aeb0000 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:08.109553324 +0000 UTC m=+1049.256795435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert") pod "infra-operator-controller-manager-58749ffdfb-4tm4n" (UID: "06df7a47-9233-4957-936e-27f58aeb0000") : secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.627525 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lcb9\" (UniqueName: \"kubernetes.io/projected/50178034-67cf-4f8d-89bb-788c8a73a72a-kube-api-access-8lcb9\") pod \"keystone-operator-controller-manager-b8b6d4659-nwpcs\" (UID: \"50178034-67cf-4f8d-89bb-788c8a73a72a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.642374 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr5q\" (UniqueName: \"kubernetes.io/projected/d869ec7c-ddd9-4e17-9154-a793539a2a00-kube-api-access-7lr5q\") pod \"horizon-operator-controller-manager-77d5c5b54f-sr2hw\" (UID: \"d869ec7c-ddd9-4e17-9154-a793539a2a00\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.645452 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrzf4\" (UniqueName: \"kubernetes.io/projected/16e17ade-97be-48d4-83d4-7ac385174edd-kube-api-access-jrzf4\") pod \"ironic-operator-controller-manager-598f7747c9-t8fsk\" (UID: \"16e17ade-97be-48d4-83d4-7ac385174edd\") " 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.649362 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.649520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxb7b\" (UniqueName: \"kubernetes.io/projected/06df7a47-9233-4957-936e-27f58aeb0000-kube-api-access-fxb7b\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.653677 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.681413 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l9k92" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.700888 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.716549 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.718418 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j64xc\" (UniqueName: \"kubernetes.io/projected/8e29e3d6-21d7-4a1a-832e-f831d884fd00-kube-api-access-j64xc\") pod \"neutron-operator-controller-manager-78d58447c5-sr4hx\" (UID: \"8e29e3d6-21d7-4a1a-832e-f831d884fd00\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.718515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxlng\" (UniqueName: \"kubernetes.io/projected/9a95eff5-116c-4141-bee6-5bda12f21e11-kube-api-access-hxlng\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l\" (UID: \"9a95eff5-116c-4141-bee6-5bda12f21e11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.718933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zg4\" (UniqueName: \"kubernetes.io/projected/ae7c1f40-90dd-441b-9dc5-608e1a503f4c-kube-api-access-b5zg4\") pod \"nova-operator-controller-manager-6b8bc8d87d-2m5hx\" (UID: \"ae7c1f40-90dd-441b-9dc5-608e1a503f4c\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.718968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqgj\" (UniqueName: \"kubernetes.io/projected/32d58a3a-df31-492e-a2c2-2f5ca31c5f90-kube-api-access-jpqgj\") pod \"manila-operator-controller-manager-78c6999f6f-jbxnk\" (UID: \"32d58a3a-df31-492e-a2c2-2f5ca31c5f90\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:07 crc kubenswrapper[4718]: 
I0123 16:34:07.731456 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.732821 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.737624 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.737901 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qcxbs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.744813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j64xc\" (UniqueName: \"kubernetes.io/projected/8e29e3d6-21d7-4a1a-832e-f831d884fd00-kube-api-access-j64xc\") pod \"neutron-operator-controller-manager-78d58447c5-sr4hx\" (UID: \"8e29e3d6-21d7-4a1a-832e-f831d884fd00\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.747308 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxlng\" (UniqueName: \"kubernetes.io/projected/9a95eff5-116c-4141-bee6-5bda12f21e11-kube-api-access-hxlng\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l\" (UID: \"9a95eff5-116c-4141-bee6-5bda12f21e11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.747739 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.765720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqgj\" (UniqueName: \"kubernetes.io/projected/32d58a3a-df31-492e-a2c2-2f5ca31c5f90-kube-api-access-jpqgj\") pod \"manila-operator-controller-manager-78c6999f6f-jbxnk\" (UID: \"32d58a3a-df31-492e-a2c2-2f5ca31c5f90\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.799473 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.801063 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.812389 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7hb77" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.821170 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.821356 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xbp\" (UniqueName: \"kubernetes.io/projected/2062a379-6201-4835-8974-24befcfbf8e0-kube-api-access-j9xbp\") pod \"octavia-operator-controller-manager-7bd9774b6-kn2t8\" (UID: \"2062a379-6201-4835-8974-24befcfbf8e0\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.821399 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-248hk\" (UniqueName: \"kubernetes.io/projected/18395392-bb8d-49be-9b49-950d6f32b9f6-kube-api-access-248hk\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.821467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5zg4\" (UniqueName: \"kubernetes.io/projected/ae7c1f40-90dd-441b-9dc5-608e1a503f4c-kube-api-access-b5zg4\") pod \"nova-operator-controller-manager-6b8bc8d87d-2m5hx\" (UID: \"ae7c1f40-90dd-441b-9dc5-608e1a503f4c\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.821497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.860937 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5zg4\" (UniqueName: \"kubernetes.io/projected/ae7c1f40-90dd-441b-9dc5-608e1a503f4c-kube-api-access-b5zg4\") pod \"nova-operator-controller-manager-6b8bc8d87d-2m5hx\" (UID: \"ae7c1f40-90dd-441b-9dc5-608e1a503f4c\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.882533 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.891288 4718 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.923624 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.924008 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.925405 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.927738 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v7rnj" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.928568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.928717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldvs\" (UniqueName: \"kubernetes.io/projected/0fe9ca7e-5763-4cba-afc1-94065f21f33e-kube-api-access-7ldvs\") pod \"ovn-operator-controller-manager-55db956ddc-znjjw\" (UID: \"0fe9ca7e-5763-4cba-afc1-94065f21f33e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.928749 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9xbp\" (UniqueName: \"kubernetes.io/projected/2062a379-6201-4835-8974-24befcfbf8e0-kube-api-access-j9xbp\") pod \"octavia-operator-controller-manager-7bd9774b6-kn2t8\" (UID: \"2062a379-6201-4835-8974-24befcfbf8e0\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.928807 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248hk\" (UniqueName: \"kubernetes.io/projected/18395392-bb8d-49be-9b49-950d6f32b9f6-kube-api-access-248hk\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: E0123 16:34:07.929257 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:07 crc kubenswrapper[4718]: E0123 16:34:07.929312 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert podName:18395392-bb8d-49be-9b49-950d6f32b9f6 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:08.429298241 +0000 UTC m=+1049.576540232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" (UID: "18395392-bb8d-49be-9b49-950d6f32b9f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.948360 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.953253 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.953308 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248hk\" (UniqueName: \"kubernetes.io/projected/18395392-bb8d-49be-9b49-950d6f32b9f6-kube-api-access-248hk\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.959023 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xbp\" (UniqueName: \"kubernetes.io/projected/2062a379-6201-4835-8974-24befcfbf8e0-kube-api-access-j9xbp\") pod \"octavia-operator-controller-manager-7bd9774b6-kn2t8\" (UID: \"2062a379-6201-4835-8974-24befcfbf8e0\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.970933 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.984122 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t"] Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.986790 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.990155 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-sntfn" Jan 23 16:34:07 crc kubenswrapper[4718]: I0123 16:34:07.997496 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.004286 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.018815 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.021980 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.027930 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dhwgn" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.034107 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.035228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldvs\" (UniqueName: \"kubernetes.io/projected/0fe9ca7e-5763-4cba-afc1-94065f21f33e-kube-api-access-7ldvs\") pod \"ovn-operator-controller-manager-55db956ddc-znjjw\" (UID: \"0fe9ca7e-5763-4cba-afc1-94065f21f33e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.035375 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrht\" (UniqueName: \"kubernetes.io/projected/49cc2143-a384-436e-8eef-4d7474918177-kube-api-access-nsrht\") pod \"placement-operator-controller-manager-5d646b7d76-9kl82\" (UID: \"49cc2143-a384-436e-8eef-4d7474918177\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.050139 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.063929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldvs\" (UniqueName: \"kubernetes.io/projected/0fe9ca7e-5763-4cba-afc1-94065f21f33e-kube-api-access-7ldvs\") pod \"ovn-operator-controller-manager-55db956ddc-znjjw\" (UID: \"0fe9ca7e-5763-4cba-afc1-94065f21f33e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.063996 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.065326 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.067792 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-s47r2" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.076891 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.130673 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.131875 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.137384 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnld\" (UniqueName: \"kubernetes.io/projected/7cd4d741-2a88-466f-a644-a1c6c62e521b-kube-api-access-2nnld\") pod \"swift-operator-controller-manager-547cbdb99f-87q6t\" (UID: \"7cd4d741-2a88-466f-a644-a1c6c62e521b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.137441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.137507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrht\" (UniqueName: 
\"kubernetes.io/projected/49cc2143-a384-436e-8eef-4d7474918177-kube-api-access-nsrht\") pod \"placement-operator-controller-manager-5d646b7d76-9kl82\" (UID: \"49cc2143-a384-436e-8eef-4d7474918177\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.137542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmwx\" (UniqueName: \"kubernetes.io/projected/3cfce3f5-1f59-43ae-aa99-2483cfb33806-kube-api-access-tsmwx\") pod \"test-operator-controller-manager-69797bbcbd-9vg4k\" (UID: \"3cfce3f5-1f59-43ae-aa99-2483cfb33806\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.137561 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kww8z\" (UniqueName: \"kubernetes.io/projected/f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078-kube-api-access-kww8z\") pod \"telemetry-operator-controller-manager-7c7754d696-xthck\" (UID: \"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078\") " pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.137720 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.137763 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert podName:06df7a47-9233-4957-936e-27f58aeb0000 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:09.137748563 +0000 UTC m=+1050.284990554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert") pod "infra-operator-controller-manager-58749ffdfb-4tm4n" (UID: "06df7a47-9233-4957-936e-27f58aeb0000") : secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.142239 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pxssj" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.156998 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.163143 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrht\" (UniqueName: \"kubernetes.io/projected/49cc2143-a384-436e-8eef-4d7474918177-kube-api-access-nsrht\") pod \"placement-operator-controller-manager-5d646b7d76-9kl82\" (UID: \"49cc2143-a384-436e-8eef-4d7474918177\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.163647 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.221677 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.223001 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.232957 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dr7ll" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.233925 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.238952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpd6\" (UniqueName: \"kubernetes.io/projected/addb55c8-8565-42c2-84d2-7ee7e8693a3a-kube-api-access-6hpd6\") pod \"watcher-operator-controller-manager-6d9458688d-5lxl4\" (UID: \"addb55c8-8565-42c2-84d2-7ee7e8693a3a\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.239010 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnld\" (UniqueName: \"kubernetes.io/projected/7cd4d741-2a88-466f-a644-a1c6c62e521b-kube-api-access-2nnld\") pod \"swift-operator-controller-manager-547cbdb99f-87q6t\" (UID: \"7cd4d741-2a88-466f-a644-a1c6c62e521b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.239221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmwx\" (UniqueName: \"kubernetes.io/projected/3cfce3f5-1f59-43ae-aa99-2483cfb33806-kube-api-access-tsmwx\") pod \"test-operator-controller-manager-69797bbcbd-9vg4k\" (UID: \"3cfce3f5-1f59-43ae-aa99-2483cfb33806\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.239240 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kww8z\" (UniqueName: \"kubernetes.io/projected/f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078-kube-api-access-kww8z\") pod \"telemetry-operator-controller-manager-7c7754d696-xthck\" (UID: \"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078\") " pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.239749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.242120 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.261365 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.267230 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.272253 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.275050 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnld\" (UniqueName: \"kubernetes.io/projected/7cd4d741-2a88-466f-a644-a1c6c62e521b-kube-api-access-2nnld\") pod \"swift-operator-controller-manager-547cbdb99f-87q6t\" (UID: \"7cd4d741-2a88-466f-a644-a1c6c62e521b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.287030 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-m4xz2" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.300533 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmwx\" (UniqueName: \"kubernetes.io/projected/3cfce3f5-1f59-43ae-aa99-2483cfb33806-kube-api-access-tsmwx\") pod \"test-operator-controller-manager-69797bbcbd-9vg4k\" (UID: \"3cfce3f5-1f59-43ae-aa99-2483cfb33806\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.301076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kww8z\" (UniqueName: \"kubernetes.io/projected/f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078-kube-api-access-kww8z\") pod \"telemetry-operator-controller-manager-7c7754d696-xthck\" (UID: \"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078\") " pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.320266 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.329218 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.342482 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.342529 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n499b\" (UniqueName: \"kubernetes.io/projected/369053b2-11b0-4e19-a77d-3ea9cf595039-kube-api-access-n499b\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.342615 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.342761 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hpd6\" (UniqueName: \"kubernetes.io/projected/addb55c8-8565-42c2-84d2-7ee7e8693a3a-kube-api-access-6hpd6\") pod \"watcher-operator-controller-manager-6d9458688d-5lxl4\" (UID: \"addb55c8-8565-42c2-84d2-7ee7e8693a3a\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:08 crc 
kubenswrapper[4718]: I0123 16:34:08.342806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvh7\" (UniqueName: \"kubernetes.io/projected/235aadec-9416-469c-8455-64dd1bc82a08-kube-api-access-srvh7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64ktf\" (UID: \"235aadec-9416-469c-8455-64dd1bc82a08\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.376422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpd6\" (UniqueName: \"kubernetes.io/projected/addb55c8-8565-42c2-84d2-7ee7e8693a3a-kube-api-access-6hpd6\") pod \"watcher-operator-controller-manager-6d9458688d-5lxl4\" (UID: \"addb55c8-8565-42c2-84d2-7ee7e8693a3a\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.379291 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.401846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.404536 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.465177 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.465259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.465318 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvh7\" (UniqueName: \"kubernetes.io/projected/235aadec-9416-469c-8455-64dd1bc82a08-kube-api-access-srvh7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64ktf\" (UID: \"235aadec-9416-469c-8455-64dd1bc82a08\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.465410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.465442 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n499b\" (UniqueName: \"kubernetes.io/projected/369053b2-11b0-4e19-a77d-3ea9cf595039-kube-api-access-n499b\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.467690 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.467738 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:08.967724438 +0000 UTC m=+1050.114966429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "metrics-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.467930 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.467958 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert podName:18395392-bb8d-49be-9b49-950d6f32b9f6 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:09.467949554 +0000 UTC m=+1050.615191545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" (UID: "18395392-bb8d-49be-9b49-950d6f32b9f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.468127 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.468157 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:08.96814684 +0000 UTC m=+1050.115388831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.508111 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.509589 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvh7\" (UniqueName: \"kubernetes.io/projected/235aadec-9416-469c-8455-64dd1bc82a08-kube-api-access-srvh7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64ktf\" (UID: \"235aadec-9416-469c-8455-64dd1bc82a08\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.512008 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n499b\" (UniqueName: \"kubernetes.io/projected/369053b2-11b0-4e19-a77d-3ea9cf595039-kube-api-access-n499b\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.513062 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.533202 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.544381 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c"] Jan 23 16:34:08 crc kubenswrapper[4718]: W0123 16:34:08.608788 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9099e2_7f4f_42d8_8e76_d2d8347a1514.slice/crio-65723512e47c95734493bdac51d40180273b306271616af9efcf4599b698d882 WatchSource:0}: Error finding container 
65723512e47c95734493bdac51d40180273b306271616af9efcf4599b698d882: Status 404 returned error can't find the container with id 65723512e47c95734493bdac51d40180273b306271616af9efcf4599b698d882 Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.655297 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.718154 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.848014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" event={"ID":"a6012879-2e20-485d-829f-3a9ec3e5bcb1","Type":"ContainerStarted","Data":"4dd4d30003d8eb0ce54cf8baadd5d09a7017a499cb7c9344319b00a15d1b79bf"} Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.850272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" event={"ID":"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2","Type":"ContainerStarted","Data":"afb1e93bf8c5b7f8a3fdece561b3933a36565ca55e955846d8fd48b74c6cbb78"} Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.852281 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" event={"ID":"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b","Type":"ContainerStarted","Data":"d59f3c0de03527dfa2529d24091ccb62261a2e89c86f1a521a59c1f2916bc85c"} Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.860826 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" event={"ID":"8d9099e2-7f4f-42d8-8e76-d2d8347a1514","Type":"ContainerStarted","Data":"65723512e47c95734493bdac51d40180273b306271616af9efcf4599b698d882"} Jan 
23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.862842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" event={"ID":"858bcd70-b537-4da9-8ca9-27c1724ece99","Type":"ContainerStarted","Data":"32b53a0f3d25f057f937b98b76e03a98c1c4ae919d13c8a111251b4eb4efab37"} Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.977646 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk"] Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.978614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.978759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.978957 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.979049 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:09.979018339 +0000 UTC m=+1051.126260330 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "metrics-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.979119 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: E0123 16:34:08.979143 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:09.979136943 +0000 UTC m=+1051.126378934 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:08 crc kubenswrapper[4718]: I0123 16:34:08.991394 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs"] Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.004556 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw"] Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.182851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:09 crc 
kubenswrapper[4718]: E0123 16:34:09.183092 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:09 crc kubenswrapper[4718]: E0123 16:34:09.183195 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert podName:06df7a47-9233-4957-936e-27f58aeb0000 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:11.183172085 +0000 UTC m=+1052.330414166 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert") pod "infra-operator-controller-manager-58749ffdfb-4tm4n" (UID: "06df7a47-9233-4957-936e-27f58aeb0000") : secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.252363 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l"] Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.288075 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx"] Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.294703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk"] Jan 23 16:34:09 crc kubenswrapper[4718]: W0123 16:34:09.299073 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d58a3a_df31_492e_a2c2_2f5ca31c5f90.slice/crio-fbc0c931301d7f4d452db87110716189a110d32f555f72a1d122e05efc85953f WatchSource:0}: Error finding container fbc0c931301d7f4d452db87110716189a110d32f555f72a1d122e05efc85953f: Status 404 returned error can't find the container with id fbc0c931301d7f4d452db87110716189a110d32f555f72a1d122e05efc85953f Jan 23 16:34:09 crc 
kubenswrapper[4718]: I0123 16:34:09.488896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:09 crc kubenswrapper[4718]: E0123 16:34:09.489284 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:09 crc kubenswrapper[4718]: E0123 16:34:09.489612 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert podName:18395392-bb8d-49be-9b49-950d6f32b9f6 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:11.48957636 +0000 UTC m=+1052.636818351 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" (UID: "18395392-bb8d-49be-9b49-950d6f32b9f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.895810 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" event={"ID":"8e29e3d6-21d7-4a1a-832e-f831d884fd00","Type":"ContainerStarted","Data":"a087ce7045533140b6e86d802f0f9098a9e7b08abe85914f6aa32168dd1bfd52"} Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.900218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" event={"ID":"16e17ade-97be-48d4-83d4-7ac385174edd","Type":"ContainerStarted","Data":"63f8fbf9c670d65fc4edb3557916b14a4523dd87a720b5b5c142a3c86b8c5a40"} Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.905770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" event={"ID":"d869ec7c-ddd9-4e17-9154-a793539a2a00","Type":"ContainerStarted","Data":"68b742d8a281bbc0fd14ec01fcd6f77173f83251058e6f9ebc4e4a3465a273b3"} Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.909353 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" event={"ID":"32d58a3a-df31-492e-a2c2-2f5ca31c5f90","Type":"ContainerStarted","Data":"fbc0c931301d7f4d452db87110716189a110d32f555f72a1d122e05efc85953f"} Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.919861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" 
event={"ID":"9a95eff5-116c-4141-bee6-5bda12f21e11","Type":"ContainerStarted","Data":"385520843b37798e67791cba47c3f818ade31381779f3522f4ddd8c337fe255d"} Jan 23 16:34:09 crc kubenswrapper[4718]: I0123 16:34:09.924265 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" event={"ID":"50178034-67cf-4f8d-89bb-788c8a73a72a","Type":"ContainerStarted","Data":"78eb693209b6508f134dc0c0e6a5efa81d59e2e6fadf0ada76e290f2b86cfe51"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.018581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.018745 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.019046 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.019085 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.019188 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs 
podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:12.019102056 +0000 UTC m=+1053.166344047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.019215 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:12.019204338 +0000 UTC m=+1053.166446329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "metrics-server-cert" not found Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.023513 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.093193 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.113871 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.123466 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.133115 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82"] Jan 23 16:34:10 crc kubenswrapper[4718]: W0123 16:34:10.158030 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49cc2143_a384_436e_8eef_4d7474918177.slice/crio-9329a6ebf3ddc340f3931d14a1c5e81c931033b50d4c6a2a86f4862313f3566b WatchSource:0}: Error finding container 9329a6ebf3ddc340f3931d14a1c5e81c931033b50d4c6a2a86f4862313f3566b: Status 404 returned error can't find the container with id 9329a6ebf3ddc340f3931d14a1c5e81c931033b50d4c6a2a86f4862313f3566b Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.165706 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.224320 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t"] Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.259709 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9xbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-kn2t8_openstack-operators(2062a379-6201-4835-8974-24befcfbf8e0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 16:34:10 crc kubenswrapper[4718]: E0123 16:34:10.260958 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" podUID="2062a379-6201-4835-8974-24befcfbf8e0" Jan 23 16:34:10 crc 
kubenswrapper[4718]: I0123 16:34:10.316560 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.381604 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8"] Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.945350 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" event={"ID":"235aadec-9416-469c-8455-64dd1bc82a08","Type":"ContainerStarted","Data":"db1b6e3f7414b2f6a27f88ab1e4f54e3f11475bc29efeb20b7ef30646aa5cd3b"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.955504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" event={"ID":"ae7c1f40-90dd-441b-9dc5-608e1a503f4c","Type":"ContainerStarted","Data":"55886f7e8fa1b7cbc3eaeb42c480e3b2ad2edb364fd0a2051d27ab4146213fa3"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.957917 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" event={"ID":"addb55c8-8565-42c2-84d2-7ee7e8693a3a","Type":"ContainerStarted","Data":"26a6494f1fb4d25d90cad15995f480f8a3f6c2d910d1f6689e4c72b8a10d8161"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.961939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" event={"ID":"3cfce3f5-1f59-43ae-aa99-2483cfb33806","Type":"ContainerStarted","Data":"251d98501926f752312caf1c3e25bbc0c8bdd86435fa16221e47518d6e715bf1"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.976060 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" 
event={"ID":"49cc2143-a384-436e-8eef-4d7474918177","Type":"ContainerStarted","Data":"9329a6ebf3ddc340f3931d14a1c5e81c931033b50d4c6a2a86f4862313f3566b"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.986223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" event={"ID":"0fe9ca7e-5763-4cba-afc1-94065f21f33e","Type":"ContainerStarted","Data":"6328f4ef6e01ca6820feb83ac7ed775a944f414de47658e69c0191a775f42e52"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.989932 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" event={"ID":"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078","Type":"ContainerStarted","Data":"852d0055c9b0ce16267f9a7d8aff8c7bcf302f688af15ca6c2c64e71711381ff"} Jan 23 16:34:10 crc kubenswrapper[4718]: I0123 16:34:10.991596 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" event={"ID":"7cd4d741-2a88-466f-a644-a1c6c62e521b","Type":"ContainerStarted","Data":"e3b795366e04a5f689c74548a43b6ee24eb94d3bc4a99c79609611629ec4c27a"} Jan 23 16:34:11 crc kubenswrapper[4718]: I0123 16:34:11.000263 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" event={"ID":"2062a379-6201-4835-8974-24befcfbf8e0","Type":"ContainerStarted","Data":"f968b6dad82b17cba02b6904af27da7e3c971c2f46b8f22187902296fb53c8bf"} Jan 23 16:34:11 crc kubenswrapper[4718]: E0123 16:34:11.002554 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" 
podUID="2062a379-6201-4835-8974-24befcfbf8e0" Jan 23 16:34:11 crc kubenswrapper[4718]: I0123 16:34:11.250461 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:11 crc kubenswrapper[4718]: E0123 16:34:11.250765 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:11 crc kubenswrapper[4718]: E0123 16:34:11.250843 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert podName:06df7a47-9233-4957-936e-27f58aeb0000 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:15.250819348 +0000 UTC m=+1056.398061339 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert") pod "infra-operator-controller-manager-58749ffdfb-4tm4n" (UID: "06df7a47-9233-4957-936e-27f58aeb0000") : secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:11 crc kubenswrapper[4718]: I0123 16:34:11.557900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:11 crc kubenswrapper[4718]: E0123 16:34:11.558164 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:11 crc kubenswrapper[4718]: E0123 16:34:11.558299 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert podName:18395392-bb8d-49be-9b49-950d6f32b9f6 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:15.558269401 +0000 UTC m=+1056.705511392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" (UID: "18395392-bb8d-49be-9b49-950d6f32b9f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:12 crc kubenswrapper[4718]: E0123 16:34:12.014671 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" podUID="2062a379-6201-4835-8974-24befcfbf8e0" Jan 23 16:34:12 crc kubenswrapper[4718]: I0123 16:34:12.072517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:12 crc kubenswrapper[4718]: I0123 16:34:12.075244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:12 crc kubenswrapper[4718]: E0123 16:34:12.075525 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:12 crc kubenswrapper[4718]: E0123 16:34:12.075654 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:16.075614156 +0000 UTC m=+1057.222856147 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:12 crc kubenswrapper[4718]: E0123 16:34:12.075784 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 16:34:12 crc kubenswrapper[4718]: E0123 16:34:12.075855 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:16.075833212 +0000 UTC m=+1057.223075203 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "metrics-server-cert" not found Jan 23 16:34:15 crc kubenswrapper[4718]: I0123 16:34:15.260850 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:15 crc kubenswrapper[4718]: E0123 16:34:15.261218 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:15 crc kubenswrapper[4718]: E0123 16:34:15.261508 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert podName:06df7a47-9233-4957-936e-27f58aeb0000 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:23.261470878 +0000 UTC m=+1064.408712879 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert") pod "infra-operator-controller-manager-58749ffdfb-4tm4n" (UID: "06df7a47-9233-4957-936e-27f58aeb0000") : secret "infra-operator-webhook-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: I0123 16:34:16.292575 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:16 crc kubenswrapper[4718]: I0123 16:34:16.293161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:16 crc kubenswrapper[4718]: I0123 16:34:16.294476 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.294812 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.300971 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:24.300854525 +0000 UTC m=+1065.448096516 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "metrics-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.301486 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.301528 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert podName:18395392-bb8d-49be-9b49-950d6f32b9f6 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:24.301519923 +0000 UTC m=+1065.448761914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" (UID: "18395392-bb8d-49be-9b49-950d6f32b9f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.301573 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:16 crc kubenswrapper[4718]: E0123 16:34:16.302290 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. 
No retries permitted until 2026-01-23 16:34:24.302238243 +0000 UTC m=+1065.449480234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:23 crc kubenswrapper[4718]: I0123 16:34:23.320178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:23 crc kubenswrapper[4718]: I0123 16:34:23.327236 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06df7a47-9233-4957-936e-27f58aeb0000-cert\") pod \"infra-operator-controller-manager-58749ffdfb-4tm4n\" (UID: \"06df7a47-9233-4957-936e-27f58aeb0000\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:23 crc kubenswrapper[4718]: I0123 16:34:23.460920 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.339708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.339784 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.339922 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:24 crc kubenswrapper[4718]: E0123 16:34:24.340795 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 16:34:24 crc kubenswrapper[4718]: E0123 16:34:24.340911 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs podName:369053b2-11b0-4e19-a77d-3ea9cf595039 nodeName:}" failed. No retries permitted until 2026-01-23 16:34:40.340887133 +0000 UTC m=+1081.488129114 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs") pod "openstack-operator-controller-manager-74c6db8f6f-rkhth" (UID: "369053b2-11b0-4e19-a77d-3ea9cf595039") : secret "webhook-server-cert" not found Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.344210 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18395392-bb8d-49be-9b49-950d6f32b9f6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bhf98\" (UID: \"18395392-bb8d-49be-9b49-950d6f32b9f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.344305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-metrics-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:24 crc kubenswrapper[4718]: I0123 16:34:24.614208 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:26 crc kubenswrapper[4718]: E0123 16:34:26.983463 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 23 16:34:26 crc kubenswrapper[4718]: E0123 16:34:26.983936 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vr5bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-jjplg_openstack-operators(9e8950bc-8213-40eb-9bb7-2e1a8c66b57b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:26 crc kubenswrapper[4718]: E0123 16:34:26.985154 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" podUID="9e8950bc-8213-40eb-9bb7-2e1a8c66b57b" Jan 23 16:34:27 crc kubenswrapper[4718]: E0123 16:34:27.444464 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" podUID="9e8950bc-8213-40eb-9bb7-2e1a8c66b57b" Jan 23 16:34:27 crc kubenswrapper[4718]: E0123 16:34:27.561652 4718 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 23 16:34:27 crc kubenswrapper[4718]: E0123 16:34:27.561927 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hxlng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l_openstack-operators(9a95eff5-116c-4141-bee6-5bda12f21e11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:27 crc kubenswrapper[4718]: E0123 16:34:27.563152 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" podUID="9a95eff5-116c-4141-bee6-5bda12f21e11" Jan 23 16:34:28 crc kubenswrapper[4718]: E0123 16:34:28.454499 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" podUID="9a95eff5-116c-4141-bee6-5bda12f21e11" Jan 23 16:34:28 crc kubenswrapper[4718]: I0123 16:34:28.875865 4718 patch_prober.go:28] interesting 
pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:34:28 crc kubenswrapper[4718]: I0123 16:34:28.875926 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:34:28 crc kubenswrapper[4718]: I0123 16:34:28.875967 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:34:28 crc kubenswrapper[4718]: I0123 16:34:28.876598 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:34:28 crc kubenswrapper[4718]: I0123 16:34:28.876677 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3" gracePeriod=600 Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.205846 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" 
Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.206285 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrzf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-t8fsk_openstack-operators(16e17ade-97be-48d4-83d4-7ac385174edd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.207687 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" podUID="16e17ade-97be-48d4-83d4-7ac385174edd" Jan 23 16:34:29 crc kubenswrapper[4718]: I0123 16:34:29.463647 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3" exitCode=0 Jan 23 16:34:29 crc kubenswrapper[4718]: I0123 16:34:29.463673 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3"} 
Jan 23 16:34:29 crc kubenswrapper[4718]: I0123 16:34:29.463736 4718 scope.go:117] "RemoveContainer" containerID="93a698fd9a68d5119c2aead8c4e3dde081f70d298b44f30d7bda86aad4caf6b2" Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.469276 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" podUID="16e17ade-97be-48d4-83d4-7ac385174edd" Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.750107 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.750310 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsmwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-9vg4k_openstack-operators(3cfce3f5-1f59-43ae-aa99-2483cfb33806): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:29 crc kubenswrapper[4718]: E0123 16:34:29.751878 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" podUID="3cfce3f5-1f59-43ae-aa99-2483cfb33806" Jan 23 16:34:30 crc kubenswrapper[4718]: E0123 16:34:30.477812 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" podUID="3cfce3f5-1f59-43ae-aa99-2483cfb33806" Jan 23 16:34:31 crc kubenswrapper[4718]: E0123 16:34:31.870537 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 16:34:31 crc kubenswrapper[4718]: E0123 16:34:31.871223 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jpqgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-jbxnk_openstack-operators(32d58a3a-df31-492e-a2c2-2f5ca31c5f90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:31 crc kubenswrapper[4718]: E0123 16:34:31.872584 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" podUID="32d58a3a-df31-492e-a2c2-2f5ca31c5f90" Jan 23 16:34:32 crc kubenswrapper[4718]: E0123 16:34:32.492990 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" podUID="32d58a3a-df31-492e-a2c2-2f5ca31c5f90" Jan 23 16:34:34 crc kubenswrapper[4718]: E0123 16:34:34.400469 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 23 16:34:34 crc kubenswrapper[4718]: E0123 16:34:34.400872 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7lr5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-sr2hw_openstack-operators(d869ec7c-ddd9-4e17-9154-a793539a2a00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:34 crc kubenswrapper[4718]: E0123 16:34:34.402245 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" podUID="d869ec7c-ddd9-4e17-9154-a793539a2a00" Jan 23 16:34:34 crc kubenswrapper[4718]: E0123 16:34:34.525919 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" podUID="d869ec7c-ddd9-4e17-9154-a793539a2a00" Jan 23 16:34:35 crc kubenswrapper[4718]: E0123 16:34:35.032825 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 16:34:35 crc kubenswrapper[4718]: E0123 16:34:35.033135 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nnld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-87q6t_openstack-operators(7cd4d741-2a88-466f-a644-a1c6c62e521b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:35 crc kubenswrapper[4718]: E0123 16:34:35.034357 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" podUID="7cd4d741-2a88-466f-a644-a1c6c62e521b" Jan 23 16:34:35 crc kubenswrapper[4718]: E0123 16:34:35.522983 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" podUID="7cd4d741-2a88-466f-a644-a1c6c62e521b" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.426838 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.427384 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hpd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-5lxl4_openstack-operators(addb55c8-8565-42c2-84d2-7ee7e8693a3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.428679 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" podUID="addb55c8-8565-42c2-84d2-7ee7e8693a3a" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.529979 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" podUID="addb55c8-8565-42c2-84d2-7ee7e8693a3a" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.930982 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.931173 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nsrht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-9kl82_openstack-operators(49cc2143-a384-436e-8eef-4d7474918177): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:36 crc kubenswrapper[4718]: E0123 16:34:36.932429 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" podUID="49cc2143-a384-436e-8eef-4d7474918177" Jan 23 16:34:37 crc kubenswrapper[4718]: E0123 16:34:37.538616 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" podUID="49cc2143-a384-436e-8eef-4d7474918177" Jan 23 16:34:38 crc kubenswrapper[4718]: E0123 16:34:38.705035 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 16:34:38 crc kubenswrapper[4718]: E0123 16:34:38.706656 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5zg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-2m5hx_openstack-operators(ae7c1f40-90dd-441b-9dc5-608e1a503f4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:38 crc kubenswrapper[4718]: E0123 16:34:38.707978 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" podUID="ae7c1f40-90dd-441b-9dc5-608e1a503f4c" Jan 23 16:34:39 crc kubenswrapper[4718]: E0123 16:34:39.569514 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" podUID="ae7c1f40-90dd-441b-9dc5-608e1a503f4c" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.021958 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.022236 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.022411 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kww8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7c7754d696-xthck_openstack-operators(f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.023679 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" podUID="f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078" Jan 23 16:34:40 crc kubenswrapper[4718]: I0123 16:34:40.388991 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:40 crc kubenswrapper[4718]: I0123 16:34:40.396464 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/369053b2-11b0-4e19-a77d-3ea9cf595039-webhook-certs\") pod \"openstack-operator-controller-manager-74c6db8f6f-rkhth\" (UID: \"369053b2-11b0-4e19-a77d-3ea9cf595039\") " pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:40 crc kubenswrapper[4718]: I0123 16:34:40.440136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dr7ll" Jan 23 16:34:40 crc kubenswrapper[4718]: I0123 16:34:40.447454 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.574840 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" podUID="f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.914864 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.915344 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lcb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-nwpcs_openstack-operators(50178034-67cf-4f8d-89bb-788c8a73a72a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:40 crc kubenswrapper[4718]: E0123 16:34:40.916704 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" podUID="50178034-67cf-4f8d-89bb-788c8a73a72a" Jan 23 16:34:41 crc kubenswrapper[4718]: E0123 16:34:41.590186 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" podUID="50178034-67cf-4f8d-89bb-788c8a73a72a" Jan 23 16:34:42 crc kubenswrapper[4718]: E0123 16:34:42.404269 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 16:34:42 crc kubenswrapper[4718]: E0123 16:34:42.404489 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 
-3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srvh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-64ktf_openstack-operators(235aadec-9416-469c-8455-64dd1bc82a08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:34:42 crc kubenswrapper[4718]: E0123 16:34:42.405724 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" podUID="235aadec-9416-469c-8455-64dd1bc82a08" Jan 23 16:34:42 crc kubenswrapper[4718]: E0123 16:34:42.680568 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" 
podUID="235aadec-9416-469c-8455-64dd1bc82a08" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.046053 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth"] Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.179331 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98"] Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.198958 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n"] Jan 23 16:34:43 crc kubenswrapper[4718]: W0123 16:34:43.216510 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-705df63dd3037891c52270baf6cd2030d5ce80aee3b6593831c12cc232a82b7b WatchSource:0}: Error finding container 705df63dd3037891c52270baf6cd2030d5ce80aee3b6593831c12cc232a82b7b: Status 404 returned error can't find the container with id 705df63dd3037891c52270baf6cd2030d5ce80aee3b6593831c12cc232a82b7b Jan 23 16:34:43 crc kubenswrapper[4718]: W0123 16:34:43.231816 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06df7a47_9233_4957_936e_27f58aeb0000.slice/crio-564a47375ada0311e57621b432fd0d0aaf8d2b2aa2100f20fbf5abe5cfb69d57 WatchSource:0}: Error finding container 564a47375ada0311e57621b432fd0d0aaf8d2b2aa2100f20fbf5abe5cfb69d57: Status 404 returned error can't find the container with id 564a47375ada0311e57621b432fd0d0aaf8d2b2aa2100f20fbf5abe5cfb69d57 Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.685876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" 
event={"ID":"18395392-bb8d-49be-9b49-950d6f32b9f6","Type":"ContainerStarted","Data":"705df63dd3037891c52270baf6cd2030d5ce80aee3b6593831c12cc232a82b7b"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.704402 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" event={"ID":"0fe9ca7e-5763-4cba-afc1-94065f21f33e","Type":"ContainerStarted","Data":"fc19cc4ef38e7d6e4a162714e7865fb4fd71a442219981f84780252246ae5c59"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.705554 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.724355 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" event={"ID":"369053b2-11b0-4e19-a77d-3ea9cf595039","Type":"ContainerStarted","Data":"743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.724389 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" event={"ID":"369053b2-11b0-4e19-a77d-3ea9cf595039","Type":"ContainerStarted","Data":"2910c8ae0ea4eb7f547486d82c8bf3bb8efa9339f9781bde9a9119ea2eb55c48"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.755829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" event={"ID":"a6012879-2e20-485d-829f-3a9ec3e5bcb1","Type":"ContainerStarted","Data":"786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.757085 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:43 crc 
kubenswrapper[4718]: I0123 16:34:43.768285 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" podStartSLOduration=4.5883206770000005 podStartE2EDuration="36.768259338s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.205724495 +0000 UTC m=+1051.352966476" lastFinishedPulling="2026-01-23 16:34:42.385663156 +0000 UTC m=+1083.532905137" observedRunningTime="2026-01-23 16:34:43.754946686 +0000 UTC m=+1084.902188677" watchObservedRunningTime="2026-01-23 16:34:43.768259338 +0000 UTC m=+1084.915501329" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.777718 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" event={"ID":"8d9099e2-7f4f-42d8-8e76-d2d8347a1514","Type":"ContainerStarted","Data":"19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.777795 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.814998 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" podStartSLOduration=2.970779313 podStartE2EDuration="36.814970127s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.575765994 +0000 UTC m=+1049.723007985" lastFinishedPulling="2026-01-23 16:34:42.419956808 +0000 UTC m=+1083.567198799" observedRunningTime="2026-01-23 16:34:43.809648192 +0000 UTC m=+1084.956890183" watchObservedRunningTime="2026-01-23 16:34:43.814970127 +0000 UTC m=+1084.962212118" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.822299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" event={"ID":"9a95eff5-116c-4141-bee6-5bda12f21e11","Type":"ContainerStarted","Data":"09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.823302 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.876436 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.936985 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" event={"ID":"858bcd70-b537-4da9-8ca9-27c1724ece99","Type":"ContainerStarted","Data":"b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74"} Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.938004 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:43 crc kubenswrapper[4718]: I0123 16:34:43.956818 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" podStartSLOduration=3.233319574 podStartE2EDuration="36.956789479s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.662188031 +0000 UTC m=+1049.809430022" lastFinishedPulling="2026-01-23 16:34:42.385657936 +0000 UTC m=+1083.532899927" observedRunningTime="2026-01-23 16:34:43.89386019 +0000 UTC m=+1085.041102181" watchObservedRunningTime="2026-01-23 16:34:43.956789479 +0000 UTC 
m=+1085.104031470" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.001799 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" event={"ID":"2062a379-6201-4835-8974-24befcfbf8e0","Type":"ContainerStarted","Data":"9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.002876 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.025855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" event={"ID":"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b","Type":"ContainerStarted","Data":"4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.027031 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.062082 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" event={"ID":"8e29e3d6-21d7-4a1a-832e-f831d884fd00","Type":"ContainerStarted","Data":"537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.062680 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.063708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" 
event={"ID":"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2","Type":"ContainerStarted","Data":"9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.064038 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.052599 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" podStartSLOduration=3.821853404 podStartE2EDuration="37.052577552s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:09.288214349 +0000 UTC m=+1050.435456330" lastFinishedPulling="2026-01-23 16:34:42.518938477 +0000 UTC m=+1083.666180478" observedRunningTime="2026-01-23 16:34:43.962272938 +0000 UTC m=+1085.109514929" watchObservedRunningTime="2026-01-23 16:34:44.052577552 +0000 UTC m=+1085.199819533" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.092940 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" event={"ID":"06df7a47-9233-4957-936e-27f58aeb0000","Type":"ContainerStarted","Data":"564a47375ada0311e57621b432fd0d0aaf8d2b2aa2100f20fbf5abe5cfb69d57"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.129529 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" event={"ID":"3cfce3f5-1f59-43ae-aa99-2483cfb33806","Type":"ContainerStarted","Data":"311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3"} Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.130549 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.137741 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" podStartSLOduration=4.878124812 podStartE2EDuration="37.137716805s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.259411785 +0000 UTC m=+1051.406653776" lastFinishedPulling="2026-01-23 16:34:42.519003778 +0000 UTC m=+1083.666245769" observedRunningTime="2026-01-23 16:34:44.068858064 +0000 UTC m=+1085.216100055" watchObservedRunningTime="2026-01-23 16:34:44.137716805 +0000 UTC m=+1085.284958796" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.144123 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" podStartSLOduration=5.572454893 podStartE2EDuration="37.144099478s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.468250232 +0000 UTC m=+1049.615492223" lastFinishedPulling="2026-01-23 16:34:40.039894807 +0000 UTC m=+1081.187136808" observedRunningTime="2026-01-23 16:34:44.115686306 +0000 UTC m=+1085.262928297" watchObservedRunningTime="2026-01-23 16:34:44.144099478 +0000 UTC m=+1085.291341469" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.201706 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" podStartSLOduration=4.11065853 podStartE2EDuration="37.201679123s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:09.294815498 +0000 UTC m=+1050.442057489" lastFinishedPulling="2026-01-23 16:34:42.385836091 +0000 UTC m=+1083.533078082" observedRunningTime="2026-01-23 16:34:44.153916245 +0000 UTC m=+1085.301158246" watchObservedRunningTime="2026-01-23 16:34:44.201679123 +0000 UTC m=+1085.348921114" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.235375 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" podStartSLOduration=5.791665819 podStartE2EDuration="37.235355818s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.599856158 +0000 UTC m=+1049.747098149" lastFinishedPulling="2026-01-23 16:34:40.043546147 +0000 UTC m=+1081.190788148" observedRunningTime="2026-01-23 16:34:44.22808003 +0000 UTC m=+1085.375322021" watchObservedRunningTime="2026-01-23 16:34:44.235355818 +0000 UTC m=+1085.382597809" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.238184 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" podStartSLOduration=3.4523868970000002 podStartE2EDuration="37.238161414s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.773734792 +0000 UTC m=+1049.920976783" lastFinishedPulling="2026-01-23 16:34:42.559509309 +0000 UTC m=+1083.706751300" observedRunningTime="2026-01-23 16:34:44.189973265 +0000 UTC m=+1085.337215256" watchObservedRunningTime="2026-01-23 16:34:44.238161414 +0000 UTC m=+1085.385403395" Jan 23 16:34:44 crc kubenswrapper[4718]: I0123 16:34:44.277717 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" podStartSLOduration=4.664262671 podStartE2EDuration="37.277698908s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.171988569 +0000 UTC m=+1051.319230560" lastFinishedPulling="2026-01-23 16:34:42.785424806 +0000 UTC m=+1083.932666797" observedRunningTime="2026-01-23 16:34:44.273740951 +0000 UTC m=+1085.420982952" watchObservedRunningTime="2026-01-23 16:34:44.277698908 +0000 UTC m=+1085.424940889" Jan 23 16:34:45 crc kubenswrapper[4718]: I0123 16:34:45.169524 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" podStartSLOduration=38.169502656 podStartE2EDuration="38.169502656s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:34:45.167019398 +0000 UTC m=+1086.314261389" watchObservedRunningTime="2026-01-23 16:34:45.169502656 +0000 UTC m=+1086.316744657" Jan 23 16:34:46 crc kubenswrapper[4718]: I0123 16:34:46.158174 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" event={"ID":"16e17ade-97be-48d4-83d4-7ac385174edd","Type":"ContainerStarted","Data":"ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14"} Jan 23 16:34:46 crc kubenswrapper[4718]: I0123 16:34:46.159648 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:46 crc kubenswrapper[4718]: I0123 16:34:46.200232 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" podStartSLOduration=2.7366015900000003 podStartE2EDuration="39.200198597s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.977095427 +0000 UTC m=+1050.124337418" lastFinishedPulling="2026-01-23 16:34:45.440692434 +0000 UTC m=+1086.587934425" observedRunningTime="2026-01-23 16:34:46.189327412 +0000 UTC m=+1087.336569403" watchObservedRunningTime="2026-01-23 16:34:46.200198597 +0000 UTC m=+1087.347440588" Jan 23 16:34:47 crc kubenswrapper[4718]: I0123 16:34:47.423965 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 16:34:47 
crc kubenswrapper[4718]: I0123 16:34:47.619352 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 16:34:48 crc kubenswrapper[4718]: I0123 16:34:48.039303 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 16:34:48 crc kubenswrapper[4718]: I0123 16:34:48.190696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 16:34:48 crc kubenswrapper[4718]: I0123 16:34:48.409425 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.209553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" event={"ID":"32d58a3a-df31-492e-a2c2-2f5ca31c5f90","Type":"ContainerStarted","Data":"6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94"} Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.210820 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.214441 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" event={"ID":"addb55c8-8565-42c2-84d2-7ee7e8693a3a","Type":"ContainerStarted","Data":"a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf"} Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.216637 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" 
event={"ID":"06df7a47-9233-4957-936e-27f58aeb0000","Type":"ContainerStarted","Data":"727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e"} Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.218316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" event={"ID":"49cc2143-a384-436e-8eef-4d7474918177","Type":"ContainerStarted","Data":"b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc"} Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.218508 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.220134 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" event={"ID":"18395392-bb8d-49be-9b49-950d6f32b9f6","Type":"ContainerStarted","Data":"a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9"} Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.220288 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.247617 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" podStartSLOduration=3.043601262 podStartE2EDuration="42.247586088s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:09.303997388 +0000 UTC m=+1050.451239379" lastFinishedPulling="2026-01-23 16:34:48.507982214 +0000 UTC m=+1089.655224205" observedRunningTime="2026-01-23 16:34:49.241177234 +0000 UTC m=+1090.388419225" watchObservedRunningTime="2026-01-23 16:34:49.247586088 +0000 UTC m=+1090.394828079" Jan 23 16:34:49 crc kubenswrapper[4718]: 
I0123 16:34:49.295533 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" podStartSLOduration=37.025960389 podStartE2EDuration="42.295500819s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:43.237233772 +0000 UTC m=+1084.384475763" lastFinishedPulling="2026-01-23 16:34:48.506774202 +0000 UTC m=+1089.654016193" observedRunningTime="2026-01-23 16:34:49.294498492 +0000 UTC m=+1090.441740483" watchObservedRunningTime="2026-01-23 16:34:49.295500819 +0000 UTC m=+1090.442742810" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.324997 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" podStartSLOduration=3.827717644 podStartE2EDuration="42.32497239s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.259169248 +0000 UTC m=+1051.406411239" lastFinishedPulling="2026-01-23 16:34:48.756423994 +0000 UTC m=+1089.903665985" observedRunningTime="2026-01-23 16:34:49.317337293 +0000 UTC m=+1090.464579284" watchObservedRunningTime="2026-01-23 16:34:49.32497239 +0000 UTC m=+1090.472214381" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 16:34:49.349806 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" podStartSLOduration=3.997415163 podStartE2EDuration="42.349782074s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.160155847 +0000 UTC m=+1051.307397838" lastFinishedPulling="2026-01-23 16:34:48.512522748 +0000 UTC m=+1089.659764749" observedRunningTime="2026-01-23 16:34:49.3397049 +0000 UTC m=+1090.486946891" watchObservedRunningTime="2026-01-23 16:34:49.349782074 +0000 UTC m=+1090.497024065" Jan 23 16:34:49 crc kubenswrapper[4718]: I0123 
16:34:49.377645 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" podStartSLOduration=37.090964805 podStartE2EDuration="42.37760987s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:43.220778144 +0000 UTC m=+1084.368020135" lastFinishedPulling="2026-01-23 16:34:48.507423209 +0000 UTC m=+1089.654665200" observedRunningTime="2026-01-23 16:34:49.376080999 +0000 UTC m=+1090.523322990" watchObservedRunningTime="2026-01-23 16:34:49.37760987 +0000 UTC m=+1090.524851861" Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.231352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" event={"ID":"d869ec7c-ddd9-4e17-9154-a793539a2a00","Type":"ContainerStarted","Data":"8d638ecf3d8d146cee0b9fbeb7f3465d0679134799cbd20dcaaab9f90e6fc8ea"} Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.232222 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.232831 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.252718 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" podStartSLOduration=2.65080675 podStartE2EDuration="43.252682364s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:09.032242935 +0000 UTC m=+1050.179484926" lastFinishedPulling="2026-01-23 16:34:49.634118549 +0000 UTC m=+1090.781360540" observedRunningTime="2026-01-23 16:34:50.249458696 +0000 UTC m=+1091.396700697" 
watchObservedRunningTime="2026-01-23 16:34:50.252682364 +0000 UTC m=+1091.399924365" Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.450899 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:50 crc kubenswrapper[4718]: I0123 16:34:50.458493 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 16:34:51 crc kubenswrapper[4718]: I0123 16:34:51.315519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" event={"ID":"7cd4d741-2a88-466f-a644-a1c6c62e521b","Type":"ContainerStarted","Data":"f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c"} Jan 23 16:34:51 crc kubenswrapper[4718]: I0123 16:34:51.317163 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:51 crc kubenswrapper[4718]: I0123 16:34:51.334790 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" podStartSLOduration=3.972670362 podStartE2EDuration="44.334753071s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.281245417 +0000 UTC m=+1051.428487408" lastFinishedPulling="2026-01-23 16:34:50.643328126 +0000 UTC m=+1091.790570117" observedRunningTime="2026-01-23 16:34:51.33362205 +0000 UTC m=+1092.480864041" watchObservedRunningTime="2026-01-23 16:34:51.334753071 +0000 UTC m=+1092.481995062" Jan 23 16:34:52 crc kubenswrapper[4718]: I0123 16:34:52.327017 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" 
event={"ID":"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078","Type":"ContainerStarted","Data":"7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db"} Jan 23 16:34:52 crc kubenswrapper[4718]: I0123 16:34:52.327723 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:52 crc kubenswrapper[4718]: I0123 16:34:52.342494 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" podStartSLOduration=4.158979693 podStartE2EDuration="45.342475918s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.018931221 +0000 UTC m=+1051.166173212" lastFinishedPulling="2026-01-23 16:34:51.202427446 +0000 UTC m=+1092.349669437" observedRunningTime="2026-01-23 16:34:52.340616857 +0000 UTC m=+1093.487858858" watchObservedRunningTime="2026-01-23 16:34:52.342475918 +0000 UTC m=+1093.489717909" Jan 23 16:34:53 crc kubenswrapper[4718]: I0123 16:34:53.470429 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 16:34:54 crc kubenswrapper[4718]: I0123 16:34:54.622783 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 16:34:55 crc kubenswrapper[4718]: I0123 16:34:55.368062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" event={"ID":"ae7c1f40-90dd-441b-9dc5-608e1a503f4c","Type":"ContainerStarted","Data":"fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae"} Jan 23 16:34:55 crc kubenswrapper[4718]: I0123 16:34:55.368365 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:34:55 crc kubenswrapper[4718]: I0123 16:34:55.392311 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" podStartSLOduration=3.840855101 podStartE2EDuration="48.392284645s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.10652076 +0000 UTC m=+1051.253762751" lastFinishedPulling="2026-01-23 16:34:54.657950294 +0000 UTC m=+1095.805192295" observedRunningTime="2026-01-23 16:34:55.387899125 +0000 UTC m=+1096.535141116" watchObservedRunningTime="2026-01-23 16:34:55.392284645 +0000 UTC m=+1096.539526636" Jan 23 16:34:56 crc kubenswrapper[4718]: I0123 16:34:56.385542 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" event={"ID":"235aadec-9416-469c-8455-64dd1bc82a08","Type":"ContainerStarted","Data":"d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff"} Jan 23 16:34:56 crc kubenswrapper[4718]: I0123 16:34:56.398491 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" podStartSLOduration=3.886757999 podStartE2EDuration="49.39846528s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:10.137748899 +0000 UTC m=+1051.284990890" lastFinishedPulling="2026-01-23 16:34:55.64945616 +0000 UTC m=+1096.796698171" observedRunningTime="2026-01-23 16:34:56.397599496 +0000 UTC m=+1097.544841497" watchObservedRunningTime="2026-01-23 16:34:56.39846528 +0000 UTC m=+1097.545707271" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.450576 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 
16:34:57.502761 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.550563 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.719543 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.754432 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.927196 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.954317 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 16:34:57 crc kubenswrapper[4718]: I0123 16:34:57.956516 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.265367 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.335641 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.381585 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.417928 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" event={"ID":"50178034-67cf-4f8d-89bb-788c8a73a72a","Type":"ContainerStarted","Data":"9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3"} Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.419219 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.441340 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" podStartSLOduration=2.742301527 podStartE2EDuration="51.441322899s" podCreationTimestamp="2026-01-23 16:34:07 +0000 UTC" firstStartedPulling="2026-01-23 16:34:08.984583881 +0000 UTC m=+1050.131825872" lastFinishedPulling="2026-01-23 16:34:57.683605243 +0000 UTC m=+1098.830847244" observedRunningTime="2026-01-23 16:34:58.437439264 +0000 UTC m=+1099.584681255" watchObservedRunningTime="2026-01-23 16:34:58.441322899 +0000 UTC m=+1099.588564890" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.509267 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:34:58 crc kubenswrapper[4718]: I0123 16:34:58.511774 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 16:35:07 crc kubenswrapper[4718]: I0123 16:35:07.896333 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 16:35:07 crc 
kubenswrapper[4718]: I0123 16:35:07.977360 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.888323 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.897311 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.899114 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w48kl" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.905210 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.905459 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.905581 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.910309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.963814 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.970233 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.973734 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 16:35:25 crc kubenswrapper[4718]: I0123 16:35:25.978597 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.012073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72nn\" (UniqueName: \"kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.012238 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.114148 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.114270 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 
crc kubenswrapper[4718]: I0123 16:35:26.114327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.114544 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m72nn\" (UniqueName: \"kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.114651 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvch\" (UniqueName: \"kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.115226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.140612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72nn\" (UniqueName: \"kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn\") pod \"dnsmasq-dns-675f4bcbfc-rhdfs\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 
16:35:26.216245 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwvch\" (UniqueName: \"kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.216317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.216362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.217301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.217419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.233979 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwvch\" 
(UniqueName: \"kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch\") pod \"dnsmasq-dns-78dd6ddcc-pn864\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.244224 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.290129 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:26 crc kubenswrapper[4718]: I0123 16:35:26.816969 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:27 crc kubenswrapper[4718]: I0123 16:35:27.013439 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:27 crc kubenswrapper[4718]: W0123 16:35:27.020368 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac038f8d_d946_4154_9b3c_449f6cba7e81.slice/crio-d57b944314cc0eddad569de13472f63960cba3ce4ee4f837696f7a568692142b WatchSource:0}: Error finding container d57b944314cc0eddad569de13472f63960cba3ce4ee4f837696f7a568692142b: Status 404 returned error can't find the container with id d57b944314cc0eddad569de13472f63960cba3ce4ee4f837696f7a568692142b Jan 23 16:35:27 crc kubenswrapper[4718]: I0123 16:35:27.733187 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" event={"ID":"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7","Type":"ContainerStarted","Data":"4c9ddc5466a8ae7e13057ef3630ac720958f82f4232aa2f68e597890ef8f4a26"} Jan 23 16:35:27 crc kubenswrapper[4718]: I0123 16:35:27.737380 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" 
event={"ID":"ac038f8d-d946-4154-9b3c-449f6cba7e81","Type":"ContainerStarted","Data":"d57b944314cc0eddad569de13472f63960cba3ce4ee4f837696f7a568692142b"} Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:28.995915 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.049511 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.051811 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.058461 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.123972 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9cd\" (UniqueName: \"kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.124079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.124108 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " 
pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.226761 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9cd\" (UniqueName: \"kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.226902 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.226925 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.229165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.229856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.251361 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9cd\" (UniqueName: \"kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd\") pod \"dnsmasq-dns-666b6646f7-q6p2x\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") " pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.389866 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.559408 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.636533 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.638096 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.648596 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"] Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.737670 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.737765 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vft64\" (UniqueName: \"kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" 
Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.737881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.839942 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.840007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vft64\" (UniqueName: \"kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.840076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.841918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.851379 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.888603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vft64\" (UniqueName: \"kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64\") pod \"dnsmasq-dns-57d769cc4f-mdkn8\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:29 crc kubenswrapper[4718]: I0123 16:35:29.981345 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.136250 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.210187 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.212212 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.220062 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.220263 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-npv8v" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.220463 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.221439 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.221578 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.221763 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.222606 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.231081 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.255430 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.257426 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.289466 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.351903 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.351981 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.352075 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.352100 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.352324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.352588 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.352869 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.353032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.353063 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.353093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjd4\" (UniqueName: 
\"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.353431 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354365 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354439 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354583 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354731 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.354756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.355069 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.355183 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxfx\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx\") pod \"rabbitmq-server-0\" (UID: 
\"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.355248 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.355287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.357336 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.357595 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.382097 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.457729 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.457991 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.458102 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.458202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.458327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459270 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459535 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info\") pod \"rabbitmq-server-1\" (UID: 
\"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459820 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjd4\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459891 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.459998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460075 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460168 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460323 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460543 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.460623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rxfx\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.461924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.462404 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.462685 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.462788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " 
pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.463028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.463541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.463783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.465993 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.466495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.466675 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.467486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.467891 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.468812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.469509 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.471337 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " 
pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.472808 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.484271 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.484303 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1f1a19402a652047b834d193030e18c391170c2a2a97a761d532da758d1e072/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.484924 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.484944 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c9f31eea737dc8db7a31716687ef2b138714b4086616d0fd0cdf6b05b7e9535/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.512431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.563066 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.577137 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rxfx\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.590714 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjd4\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4\") pod \"rabbitmq-server-1\" 
(UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.620229 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info\") pod \"rabbitmq-server-1\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") " pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.661683 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666375 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666416 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnpl\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666443 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " 
pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666462 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666485 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666537 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666560 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " 
pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666754 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.666783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.746290 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.750817 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.754205 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.754497 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.754604 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.755201 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.755302 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.755781 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t559l" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.757398 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.760011 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.770698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.770780 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnpl\" (UniqueName: 
\"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.770860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.770886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.770913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771087 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf\") pod 
\"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771123 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.771248 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.772728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.773925 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.777255 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.777531 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.777581 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.777667 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.781530 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret\") pod 
\"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.787749 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.788733 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.789833 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4a447bcf10e45026334e95e1687623774169b3df4db8276f7228b00b19c67e0a/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.803354 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnpl\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.803399 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc 
kubenswrapper[4718]: I0123 16:35:30.827510 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" event={"ID":"a5ce9192-bc15-453c-a45d-c242b273bb74","Type":"ContainerStarted","Data":"04196eadc0f99a0292420dc6d7b3a8a0c3395e67f56e996a6e990bfa095934de"} Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.831385 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.866320 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.873700 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.873798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.873912 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.874035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.874171 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2h9b\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.874200 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.877186 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.877250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.877285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.877319 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.877340 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.896704 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980339 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2h9b\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc 
kubenswrapper[4718]: I0123 16:35:30.980498 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980591 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980669 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.980872 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.982258 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.985017 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.987295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.988190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.988832 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.989838 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.989883 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/912fab9ca85c0800faa14298d65b1b37576d7b5c16478735c0355609b4c85a2e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.990953 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.992178 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.994176 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:30 crc kubenswrapper[4718]: I0123 16:35:30.994548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.004929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2h9b\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.057180 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.105690 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.130522 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.403708 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.500741 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.629984 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.706438 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.710528 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.716489 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-h5bjl" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.718370 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.720826 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.720879 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.724324 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.736159 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.794260 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797641 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-kolla-config\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797706 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9t6\" (UniqueName: \"kubernetes.io/projected/592a76d3-742f-47a0-9054-309fb2670fa3-kube-api-access-mw9t6\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " 
pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797800 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-default\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797820 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797889 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.797972 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.856517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerStarted","Data":"624a55f51dc269b8bffd32c814f9b8906a3d99860daa9237afe49da6ca09c0f0"} Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.862657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" event={"ID":"4d43ef11-3b10-4dfd-981b-4d981157db0e","Type":"ContainerStarted","Data":"7a799b6327ed42b8fbc75e39e48e4a8dc8c602869cdeca6f0176107c05a3552f"} Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.869685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerStarted","Data":"90dc1165db2f99497fd341d8ab217497862f235efe2b028fc5b88b6ae955b3e9"} Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.881585 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerStarted","Data":"8e412a0fa4352cf24a6c3919a234a36006352c8e870b7a5044ccc40c63b392bb"} Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.900919 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-generated\") pod \"openstack-galera-0\" 
(UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.900982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901097 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-kolla-config\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901127 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9t6\" (UniqueName: \"kubernetes.io/projected/592a76d3-742f-47a0-9054-309fb2670fa3-kube-api-access-mw9t6\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 
16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-default\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.901209 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.902240 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.903439 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-kolla-config\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.906130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-config-data-default\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.909171 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping MountDevice... Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.909231 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bafd5e57702c89b278630fe5801d18e83dad362a4a1adc8cc3f485b5ea569bae/globalmount\"" pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.911167 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/592a76d3-742f-47a0-9054-309fb2670fa3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.920918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.921558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9t6\" (UniqueName: \"kubernetes.io/projected/592a76d3-742f-47a0-9054-309fb2670fa3-kube-api-access-mw9t6\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.930928 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592a76d3-742f-47a0-9054-309fb2670fa3-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:31 crc kubenswrapper[4718]: I0123 16:35:31.999508 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-948c96b2-36ee-482f-9249-ea15ea1ba833\") pod \"openstack-galera-0\" (UID: \"592a76d3-742f-47a0-9054-309fb2670fa3\") " pod="openstack/openstack-galera-0" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.033325 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.557922 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.924961 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.929610 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.939255 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.968943 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.969160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-srk2t" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.969351 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.969624 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 16:35:32 crc kubenswrapper[4718]: I0123 16:35:32.986604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerStarted","Data":"8ebf0b0dc15c30e22fca24e739ca6e9a352fdf885d26ba25c891aa2e9eed91d9"} Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.029986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91543550-f764-468a-a1e1-980e3d08aa41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52b08330-74d6-4049-942b-54b738ccc49d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52b08330-74d6-4049-942b-54b738ccc49d\") pod 
\"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030065 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030102 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6wv\" (UniqueName: \"kubernetes.io/projected/91543550-f764-468a-a1e1-980e3d08aa41-kube-api-access-fq6wv\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030121 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030159 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.030182 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.133751 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91543550-f764-468a-a1e1-980e3d08aa41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.134488 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91543550-f764-468a-a1e1-980e3d08aa41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.134586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52b08330-74d6-4049-942b-54b738ccc49d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52b08330-74d6-4049-942b-54b738ccc49d\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.134700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.136829 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.136893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6wv\" (UniqueName: \"kubernetes.io/projected/91543550-f764-468a-a1e1-980e3d08aa41-kube-api-access-fq6wv\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.136941 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.137108 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.137160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.138618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.138874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.146244 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91543550-f764-468a-a1e1-980e3d08aa41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.149349 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.149403 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52b08330-74d6-4049-942b-54b738ccc49d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52b08330-74d6-4049-942b-54b738ccc49d\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd40fc166fe716d7bee291a00c493de1325682548b59e00d5c0921e9aa4937d1/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.156400 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.167820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91543550-f764-468a-a1e1-980e3d08aa41-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.182357 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6wv\" (UniqueName: \"kubernetes.io/projected/91543550-f764-468a-a1e1-980e3d08aa41-kube-api-access-fq6wv\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.289556 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.293423 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.299422 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.302025 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gmjwq" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.303188 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.303614 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.321320 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52b08330-74d6-4049-942b-54b738ccc49d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52b08330-74d6-4049-942b-54b738ccc49d\") pod \"openstack-cell1-galera-0\" (UID: \"91543550-f764-468a-a1e1-980e3d08aa41\") " pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.390114 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-config-data\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.390248 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599f9\" (UniqueName: \"kubernetes.io/projected/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kube-api-access-599f9\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.390390 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.390450 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.390481 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kolla-config\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.492919 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-599f9\" (UniqueName: \"kubernetes.io/projected/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kube-api-access-599f9\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.493100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.493174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.493204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kolla-config\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.493256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-config-data\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.494364 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-config-data\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.494601 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kolla-config\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.500842 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.508804 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.520493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-599f9\" (UniqueName: \"kubernetes.io/projected/ffd9d6ca-1e5b-4102-8b5b-664ebd967619-kube-api-access-599f9\") pod \"memcached-0\" (UID: \"ffd9d6ca-1e5b-4102-8b5b-664ebd967619\") " pod="openstack/memcached-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.624890 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 16:35:33 crc kubenswrapper[4718]: I0123 16:35:33.630843 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.505734 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.515808 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.517662 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.520199 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g2prq" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.677420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m87f4\" (UniqueName: \"kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4\") pod \"kube-state-metrics-0\" (UID: \"ee115b8e-40cf-4641-acd3-13132054a9b7\") " pod="openstack/kube-state-metrics-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.792098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m87f4\" (UniqueName: \"kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4\") pod \"kube-state-metrics-0\" (UID: \"ee115b8e-40cf-4641-acd3-13132054a9b7\") " pod="openstack/kube-state-metrics-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.846409 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m87f4\" (UniqueName: \"kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4\") pod \"kube-state-metrics-0\" (UID: \"ee115b8e-40cf-4641-acd3-13132054a9b7\") " pod="openstack/kube-state-metrics-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:35.856036 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.409517 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-js85h"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.411032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.416723 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-jkbtv" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.416569 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.424372 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-js85h"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.431737 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.431806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lbc\" (UniqueName: \"kubernetes.io/projected/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-kube-api-access-v5lbc\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 
16:35:36.534458 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.534532 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lbc\" (UniqueName: \"kubernetes.io/projected/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-kube-api-access-v5lbc\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: E0123 16:35:36.535120 4718 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 23 16:35:36 crc kubenswrapper[4718]: E0123 16:35:36.535229 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert podName:206601f2-166b-4dcf-9f9b-77a64e3f6c5b nodeName:}" failed. No retries permitted until 2026-01-23 16:35:37.035203657 +0000 UTC m=+1138.182445648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert") pod "observability-ui-dashboards-66cbf594b5-js85h" (UID: "206601f2-166b-4dcf-9f9b-77a64e3f6c5b") : secret "observability-ui-dashboards" not found Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.565737 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lbc\" (UniqueName: \"kubernetes.io/projected/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-kube-api-access-v5lbc\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.770806 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bf5998ddd-nzjmh"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.772529 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.780080 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.783130 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.785836 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.786049 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.786150 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.789592 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.790099 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xl2lt" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.790219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.790970 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.795063 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bf5998ddd-nzjmh"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.802282 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.849945 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.952062 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-service-ca\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.952117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2fg\" (UniqueName: \"kubernetes.io/projected/759cb0cb-1122-43e0-800f-2dc054632802-kube-api-access-fr2fg\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.952207 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.952232 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qz9\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.952284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.954007 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-oauth-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.955918 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.955974 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-trusted-ca-bundle\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.956118 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.956171 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.956209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.956253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-oauth-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.957034 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.957112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.957144 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-console-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.957223 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:36 crc kubenswrapper[4718]: I0123 16:35:36.957282 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-service-ca\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059540 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2fg\" (UniqueName: \"kubernetes.io/projected/759cb0cb-1122-43e0-800f-2dc054632802-kube-api-access-fr2fg\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059580 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059600 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059618 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qz9\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059659 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059682 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-oauth-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: 
I0123 16:35:37.059707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059726 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-trusted-ca-bundle\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059764 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059792 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059832 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc 
kubenswrapper[4718]: I0123 16:35:37.059852 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-oauth-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059869 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.059904 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-console-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.060832 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-service-ca\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.061646 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.061771 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.063295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-oauth-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.064534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.064608 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-console-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.067790 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-serving-cert\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.072600 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/759cb0cb-1122-43e0-800f-2dc054632802-console-oauth-config\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.073443 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.074965 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206601f2-166b-4dcf-9f9b-77a64e3f6c5b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-js85h\" (UID: \"206601f2-166b-4dcf-9f9b-77a64e3f6c5b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.081934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.086159 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759cb0cb-1122-43e0-800f-2dc054632802-trusted-ca-bundle\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.087213 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2fg\" (UniqueName: \"kubernetes.io/projected/759cb0cb-1122-43e0-800f-2dc054632802-kube-api-access-fr2fg\") pod \"console-bf5998ddd-nzjmh\" (UID: \"759cb0cb-1122-43e0-800f-2dc054632802\") " pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.088667 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.089243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.089325 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.089354 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6d1c620b879242608879e94752b6e1bafbbd2b66e575591b8e49353c60cb357/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.092829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.093158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.094194 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.094325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qz9\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.118219 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bf5998ddd-nzjmh"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.144410 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.343230 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h"
Jan 23 16:35:37 crc kubenswrapper[4718]: I0123 16:35:37.431761 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.904872 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.907764 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.911597 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.911710 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.911771 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-54gnj"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.911784 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.913049 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 23 16:35:38 crc kubenswrapper[4718]: I0123 16:35:38.913906 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.030879 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.030990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65hpl\" (UniqueName: \"kubernetes.io/projected/28af9db0-905d-46cc-8ab9-887e0f58ee9b-kube-api-access-65hpl\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031289 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.031308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137016 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137088 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137188 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137291 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65hpl\" (UniqueName: \"kubernetes.io/projected/28af9db0-905d-46cc-8ab9-887e0f58ee9b-kube-api-access-65hpl\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137383 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.137809 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.141486 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.141694 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.141885 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.142040 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.152102 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.152984 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.153020 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4fa57dfd9254b56f8a901e609a3d7c3d09210e3f2d245fff70e419d1841bc069/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.154598 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.155284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.158874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28af9db0-905d-46cc-8ab9-887e0f58ee9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.160865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28af9db0-905d-46cc-8ab9-887e0f58ee9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.164313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65hpl\" (UniqueName: \"kubernetes.io/projected/28af9db0-905d-46cc-8ab9-887e0f58ee9b-kube-api-access-65hpl\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.229533 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f92df2e-e9e2-4ce5-b339-87cbca0453df\") pod \"ovsdbserver-sb-0\" (UID: \"28af9db0-905d-46cc-8ab9-887e0f58ee9b\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.253280 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-54gnj"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.256463 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.995697 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9rfg"]
Jan 23 16:35:39 crc kubenswrapper[4718]: I0123 16:35:39.997239 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.000788 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.001932 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.002252 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4v7sr"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.009195 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f2xs2"]
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.012132 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.018538 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg"]
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.037943 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f2xs2"]
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177424 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-run\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6481072-75b7-4b67-94a4-94041ef225f6-scripts\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177514 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177531 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-log\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177607 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-log-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177668 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rfq\" (UniqueName: \"kubernetes.io/projected/a6481072-75b7-4b67-94a4-94041ef225f6-kube-api-access-j6rfq\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177744 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-combined-ca-bundle\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-lib\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177782 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpsj\" (UniqueName: \"kubernetes.io/projected/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-kube-api-access-qqpsj\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177808 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-scripts\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-ovn-controller-tls-certs\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.177843 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-etc-ovs\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.279402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.279920 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-combined-ca-bundle\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.279952 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-lib\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.279982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqpsj\" (UniqueName: \"kubernetes.io/projected/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-kube-api-access-qqpsj\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.280032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-scripts\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.280061 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-ovn-controller-tls-certs\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282525 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-etc-ovs\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282560 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-run\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282596 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6481072-75b7-4b67-94a4-94041ef225f6-scripts\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-log\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282827 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-log-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6rfq\" (UniqueName: \"kubernetes.io/projected/a6481072-75b7-4b67-94a4-94041ef225f6-kube-api-access-j6rfq\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.280621 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-lib\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.283866 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-etc-ovs\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.281177 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.284023 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-run\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.282484 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-scripts\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.284218 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-run\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.284880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6481072-75b7-4b67-94a4-94041ef225f6-var-log\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.285988 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-var-log-ovn\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.287766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6481072-75b7-4b67-94a4-94041ef225f6-scripts\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.301393 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-ovn-controller-tls-certs\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg"
Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.301526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-combined-ca-bundle\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg" Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.301897 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqpsj\" (UniqueName: \"kubernetes.io/projected/d2e5ad0b-04cf-49a9-badc-9e3184385c5b-kube-api-access-qqpsj\") pod \"ovn-controller-c9rfg\" (UID: \"d2e5ad0b-04cf-49a9-badc-9e3184385c5b\") " pod="openstack/ovn-controller-c9rfg" Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.304017 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6rfq\" (UniqueName: \"kubernetes.io/projected/a6481072-75b7-4b67-94a4-94041ef225f6-kube-api-access-j6rfq\") pod \"ovn-controller-ovs-f2xs2\" (UID: \"a6481072-75b7-4b67-94a4-94041ef225f6\") " pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.304112 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"592a76d3-742f-47a0-9054-309fb2670fa3","Type":"ContainerStarted","Data":"88269d6ce07266122944491ae88c051bed23fc941604fdc88a7fa960669515c6"} Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.318532 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9rfg" Jan 23 16:35:40 crc kubenswrapper[4718]: I0123 16:35:40.353740 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.720874 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.723624 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.726573 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f5j45" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.726825 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.726989 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.736664 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.738974 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-config\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852361 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6496de42-3f45-4f35-936b-12731d7c0c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6496de42-3f45-4f35-936b-12731d7c0c91\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " 
pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852396 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852486 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snkp\" (UniqueName: \"kubernetes.io/projected/cf53eabe-609c-471c-ae7e-ca9fb950f86e-kube-api-access-8snkp\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852854 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.852967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.853123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc 
kubenswrapper[4718]: I0123 16:35:42.853170 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955734 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955802 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955904 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-config\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.955983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6496de42-3f45-4f35-936b-12731d7c0c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6496de42-3f45-4f35-936b-12731d7c0c91\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.956009 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.956088 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8snkp\" (UniqueName: \"kubernetes.io/projected/cf53eabe-609c-471c-ae7e-ca9fb950f86e-kube-api-access-8snkp\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.957514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.957809 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.960922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf53eabe-609c-471c-ae7e-ca9fb950f86e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.962879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.963208 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.964788 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.964827 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6496de42-3f45-4f35-936b-12731d7c0c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6496de42-3f45-4f35-936b-12731d7c0c91\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8638570e872c9cf5f1cfb600553027271ada7b69ba305fd4190e7969ede902a1/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.965522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53eabe-609c-471c-ae7e-ca9fb950f86e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:42 crc kubenswrapper[4718]: I0123 16:35:42.987454 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snkp\" (UniqueName: \"kubernetes.io/projected/cf53eabe-609c-471c-ae7e-ca9fb950f86e-kube-api-access-8snkp\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:43 crc kubenswrapper[4718]: I0123 16:35:43.004478 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6496de42-3f45-4f35-936b-12731d7c0c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6496de42-3f45-4f35-936b-12731d7c0c91\") pod \"ovsdbserver-nb-0\" (UID: \"cf53eabe-609c-471c-ae7e-ca9fb950f86e\") " pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:43 crc kubenswrapper[4718]: I0123 16:35:43.061894 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 16:35:44 crc kubenswrapper[4718]: I0123 16:35:44.850149 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bf5998ddd-nzjmh"] Jan 23 16:35:49 crc kubenswrapper[4718]: I0123 16:35:49.429602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bf5998ddd-nzjmh" event={"ID":"759cb0cb-1122-43e0-800f-2dc054632802","Type":"ContainerStarted","Data":"e2db74a428214cf1034bc9f140230576a8e4e5851f0a427724c75e630c217605"} Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.764186 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.765600 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwvch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pn864_openstack(ac038f8d-d946-4154-9b3c-449f6cba7e81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.767839 4718 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" podUID="ac038f8d-d946-4154-9b3c-449f6cba7e81" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.808665 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.808840 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vft64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-mdkn8_openstack(4d43ef11-3b10-4dfd-981b-4d981157db0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.810001 4718 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.835834 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.836023 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hv9cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-q6p2x_openstack(a5ce9192-bc15-453c-a45d-c242b273bb74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:35:54 crc kubenswrapper[4718]: E0123 16:35:54.837390 4718 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" Jan 23 16:35:55 crc kubenswrapper[4718]: I0123 16:35:55.197690 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-js85h"] Jan 23 16:35:55 crc kubenswrapper[4718]: I0123 16:35:55.322164 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 16:35:55 crc kubenswrapper[4718]: I0123 16:35:55.334493 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 16:35:55 crc kubenswrapper[4718]: I0123 16:35:55.487339 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f2xs2"] Jan 23 16:35:55 crc kubenswrapper[4718]: E0123 16:35:55.497327 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" Jan 23 16:35:55 crc kubenswrapper[4718]: E0123 16:35:55.497534 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.120603 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 16:35:57 crc 
kubenswrapper[4718]: E0123 16:35:57.121286 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw9t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(592a76d3-742f-47a0-9054-309fb2670fa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.122467 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.156781 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.157038 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m72nn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rhdfs_openstack(b6c7de8a-ddef-4ca7-96a6-0ee096e226e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.159654 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" podUID="b6c7de8a-ddef-4ca7-96a6-0ee096e226e7" Jan 23 16:35:57 crc kubenswrapper[4718]: W0123 16:35:57.227398 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6481072_75b7_4b67_94a4_94041ef225f6.slice/crio-2ae9d7c5495c65953cab1a301f12772096b046f1e0944f774a9e96b78aab30b4 WatchSource:0}: Error finding container 2ae9d7c5495c65953cab1a301f12772096b046f1e0944f774a9e96b78aab30b4: Status 404 returned error can't find the container with id 2ae9d7c5495c65953cab1a301f12772096b046f1e0944f774a9e96b78aab30b4 Jan 23 16:35:57 crc kubenswrapper[4718]: W0123 16:35:57.231310 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91543550_f764_468a_a1e1_980e3d08aa41.slice/crio-3360f119e684ba9164ca67804aecb3464ef6c22436eb17eadd683526ae4ec704 WatchSource:0}: Error finding container 3360f119e684ba9164ca67804aecb3464ef6c22436eb17eadd683526ae4ec704: Status 404 returned error can't find the container with id 3360f119e684ba9164ca67804aecb3464ef6c22436eb17eadd683526ae4ec704 Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.428616 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.518476 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.518533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pn864" event={"ID":"ac038f8d-d946-4154-9b3c-449f6cba7e81","Type":"ContainerDied","Data":"d57b944314cc0eddad569de13472f63960cba3ce4ee4f837696f7a568692142b"} Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.522239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91543550-f764-468a-a1e1-980e3d08aa41","Type":"ContainerStarted","Data":"3360f119e684ba9164ca67804aecb3464ef6c22436eb17eadd683526ae4ec704"} Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.524190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ffd9d6ca-1e5b-4102-8b5b-664ebd967619","Type":"ContainerStarted","Data":"a24c90f078aa36e8ce50cbbed052912dc42c148e9199a7c45b8deafeaf34e530"} Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.540148 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config\") pod \"ac038f8d-d946-4154-9b3c-449f6cba7e81\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.540218 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwvch\" (UniqueName: \"kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch\") pod \"ac038f8d-d946-4154-9b3c-449f6cba7e81\" (UID: \"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.540347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc\") pod \"ac038f8d-d946-4154-9b3c-449f6cba7e81\" (UID: 
\"ac038f8d-d946-4154-9b3c-449f6cba7e81\") " Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.541454 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config" (OuterVolumeSpecName: "config") pod "ac038f8d-d946-4154-9b3c-449f6cba7e81" (UID: "ac038f8d-d946-4154-9b3c-449f6cba7e81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.541645 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac038f8d-d946-4154-9b3c-449f6cba7e81" (UID: "ac038f8d-d946-4154-9b3c-449f6cba7e81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.543754 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f2xs2" event={"ID":"a6481072-75b7-4b67-94a4-94041ef225f6","Type":"ContainerStarted","Data":"2ae9d7c5495c65953cab1a301f12772096b046f1e0944f774a9e96b78aab30b4"} Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.547441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch" (OuterVolumeSpecName: "kube-api-access-cwvch") pod "ac038f8d-d946-4154-9b3c-449f6cba7e81" (UID: "ac038f8d-d946-4154-9b3c-449f6cba7e81"). InnerVolumeSpecName "kube-api-access-cwvch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.547916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" event={"ID":"206601f2-166b-4dcf-9f9b-77a64e3f6c5b","Type":"ContainerStarted","Data":"367119f90281caa88b644e29f3f58e3f617d8303553f322e7113ee935e9ed480"} Jan 23 16:35:57 crc kubenswrapper[4718]: E0123 16:35:57.550235 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.569491 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bf5998ddd-nzjmh" podStartSLOduration=21.569466477 podStartE2EDuration="21.569466477s" podCreationTimestamp="2026-01-23 16:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:35:57.555604641 +0000 UTC m=+1158.702846642" watchObservedRunningTime="2026-01-23 16:35:57.569466477 +0000 UTC m=+1158.716708468" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.643502 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.643557 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwvch\" (UniqueName: \"kubernetes.io/projected/ac038f8d-d946-4154-9b3c-449f6cba7e81-kube-api-access-cwvch\") on node \"crc\" DevicePath \"\"" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.643569 4718 reconciler_common.go:293] "Volume 
detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac038f8d-d946-4154-9b3c-449f6cba7e81-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.786378 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 16:35:57 crc kubenswrapper[4718]: W0123 16:35:57.793950 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28af9db0_905d_46cc_8ab9_887e0f58ee9b.slice/crio-4c78b8f684b3a7831db74b952fc86d8ccf053b86afcadc2fb903dae0ecb259a4 WatchSource:0}: Error finding container 4c78b8f684b3a7831db74b952fc86d8ccf053b86afcadc2fb903dae0ecb259a4: Status 404 returned error can't find the container with id 4c78b8f684b3a7831db74b952fc86d8ccf053b86afcadc2fb903dae0ecb259a4 Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.902327 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:57 crc kubenswrapper[4718]: I0123 16:35:57.920271 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn864"] Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.050898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg"] Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.100093 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.119681 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:35:58 crc kubenswrapper[4718]: W0123 16:35:58.139930 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee115b8e_40cf_4641_acd3_13132054a9b7.slice/crio-d50f847a2901d4df7835dc73d0cf3443854e05e4ecf4cfe32a2b8f9a40619fd4 WatchSource:0}: Error finding container 
d50f847a2901d4df7835dc73d0cf3443854e05e4ecf4cfe32a2b8f9a40619fd4: Status 404 returned error can't find the container with id d50f847a2901d4df7835dc73d0cf3443854e05e4ecf4cfe32a2b8f9a40619fd4 Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.154365 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.535185 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.561705 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg" event={"ID":"d2e5ad0b-04cf-49a9-badc-9e3184385c5b","Type":"ContainerStarted","Data":"afc78889b9232b1e1eb837d97a756d32923df174072603e28c56e28848d9f696"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.564732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bf5998ddd-nzjmh" event={"ID":"759cb0cb-1122-43e0-800f-2dc054632802","Type":"ContainerStarted","Data":"bc7d94ff1ecb497adecf0a45ce48d2ee384cb85ef343d076d5d7d894b878fb6c"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.568829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerStarted","Data":"5a75377ab27caa7f696d293f602be2f5e070a6d5d85f9fe290c3cbe4b1b1a72f"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.570754 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee115b8e-40cf-4641-acd3-13132054a9b7","Type":"ContainerStarted","Data":"d50f847a2901d4df7835dc73d0cf3443854e05e4ecf4cfe32a2b8f9a40619fd4"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.572601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"28af9db0-905d-46cc-8ab9-887e0f58ee9b","Type":"ContainerStarted","Data":"4c78b8f684b3a7831db74b952fc86d8ccf053b86afcadc2fb903dae0ecb259a4"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.574046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" event={"ID":"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7","Type":"ContainerDied","Data":"4c9ddc5466a8ae7e13057ef3630ac720958f82f4232aa2f68e597890ef8f4a26"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.574128 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rhdfs" Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.578975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91543550-f764-468a-a1e1-980e3d08aa41","Type":"ContainerStarted","Data":"065a86348ae252725f8a84148e0c18187076b9c72449079f3146f477dba6ccca"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.586153 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cf53eabe-609c-471c-ae7e-ca9fb950f86e","Type":"ContainerStarted","Data":"bb99050b0540b54c7742fae2228a9817c44aa3f3b38580d15de769945ef93ca5"} Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.676086 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m72nn\" (UniqueName: \"kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn\") pod \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.676245 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config\") pod \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\" (UID: \"b6c7de8a-ddef-4ca7-96a6-0ee096e226e7\") " Jan 23 16:35:58 crc 
kubenswrapper[4718]: I0123 16:35:58.677046 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config" (OuterVolumeSpecName: "config") pod "b6c7de8a-ddef-4ca7-96a6-0ee096e226e7" (UID: "b6c7de8a-ddef-4ca7-96a6-0ee096e226e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.677832 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.943559 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn" (OuterVolumeSpecName: "kube-api-access-m72nn") pod "b6c7de8a-ddef-4ca7-96a6-0ee096e226e7" (UID: "b6c7de8a-ddef-4ca7-96a6-0ee096e226e7"). InnerVolumeSpecName "kube-api-access-m72nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:35:58 crc kubenswrapper[4718]: I0123 16:35:58.983747 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m72nn\" (UniqueName: \"kubernetes.io/projected/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7-kube-api-access-m72nn\") on node \"crc\" DevicePath \"\"" Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.168254 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac038f8d-d946-4154-9b3c-449f6cba7e81" path="/var/lib/kubelet/pods/ac038f8d-d946-4154-9b3c-449f6cba7e81/volumes" Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.357913 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.365794 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rhdfs"] Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.605482 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerStarted","Data":"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49"} Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.607779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerStarted","Data":"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"} Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.610435 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerStarted","Data":"4ee5fb187d594a7dd22f54bcafa87a4a75cd20fc67e027fbd78b5f35074b51db"} Jan 23 16:35:59 crc kubenswrapper[4718]: I0123 16:35:59.612088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerStarted","Data":"57a7d3e23a91f61a08f1235702ca3507ea01f14b822f4e1690cf5b7da6226024"} Jan 23 16:36:01 crc kubenswrapper[4718]: I0123 16:36:01.165075 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c7de8a-ddef-4ca7-96a6-0ee096e226e7" path="/var/lib/kubelet/pods/b6c7de8a-ddef-4ca7-96a6-0ee096e226e7/volumes" Jan 23 16:36:01 crc kubenswrapper[4718]: I0123 16:36:01.631884 4718 generic.go:334] "Generic (PLEG): container finished" podID="91543550-f764-468a-a1e1-980e3d08aa41" containerID="065a86348ae252725f8a84148e0c18187076b9c72449079f3146f477dba6ccca" exitCode=0 Jan 23 16:36:01 crc kubenswrapper[4718]: I0123 16:36:01.631932 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91543550-f764-468a-a1e1-980e3d08aa41","Type":"ContainerDied","Data":"065a86348ae252725f8a84148e0c18187076b9c72449079f3146f477dba6ccca"} Jan 23 16:36:03 crc kubenswrapper[4718]: I0123 16:36:03.653808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ffd9d6ca-1e5b-4102-8b5b-664ebd967619","Type":"ContainerStarted","Data":"ec18c68524e112e54fafd3e8758e57e3960f2db6cb8296c7da6a5d3395f01427"} Jan 23 16:36:03 crc kubenswrapper[4718]: I0123 16:36:03.654507 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 16:36:03 crc kubenswrapper[4718]: I0123 16:36:03.685320 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.012767499 podStartE2EDuration="30.685291389s" podCreationTimestamp="2026-01-23 16:35:33 +0000 UTC" firstStartedPulling="2026-01-23 16:35:57.266488436 +0000 UTC m=+1158.413730427" lastFinishedPulling="2026-01-23 16:36:01.939012326 +0000 UTC m=+1163.086254317" observedRunningTime="2026-01-23 16:36:03.673936531 +0000 UTC m=+1164.821178532" watchObservedRunningTime="2026-01-23 
16:36:03.685291389 +0000 UTC m=+1164.832533380" Jan 23 16:36:04 crc kubenswrapper[4718]: I0123 16:36:04.665119 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91543550-f764-468a-a1e1-980e3d08aa41","Type":"ContainerStarted","Data":"366ba33dca72f0c0f5bc84780222e871a7b69088b254296527537cb2c51bd7ed"} Jan 23 16:36:04 crc kubenswrapper[4718]: I0123 16:36:04.667796 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f2xs2" event={"ID":"a6481072-75b7-4b67-94a4-94041ef225f6","Type":"ContainerStarted","Data":"811e9dbd70a971cfe6bfeca9f8e804363e7adab96f05acc1151adf0a53792385"} Jan 23 16:36:04 crc kubenswrapper[4718]: I0123 16:36:04.669969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" event={"ID":"206601f2-166b-4dcf-9f9b-77a64e3f6c5b","Type":"ContainerStarted","Data":"d18240d3c54c57b3d7bd8702c1b84c4fd0a0c10c9b6f996590326efebdd8c324"} Jan 23 16:36:04 crc kubenswrapper[4718]: I0123 16:36:04.687155 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=33.17674876 podStartE2EDuration="33.687139666s" podCreationTimestamp="2026-01-23 16:35:31 +0000 UTC" firstStartedPulling="2026-01-23 16:35:57.26593068 +0000 UTC m=+1158.413172671" lastFinishedPulling="2026-01-23 16:35:57.776321586 +0000 UTC m=+1158.923563577" observedRunningTime="2026-01-23 16:36:04.685085171 +0000 UTC m=+1165.832327172" watchObservedRunningTime="2026-01-23 16:36:04.687139666 +0000 UTC m=+1165.834381657" Jan 23 16:36:04 crc kubenswrapper[4718]: I0123 16:36:04.721364 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-js85h" podStartSLOduration=23.417970487 podStartE2EDuration="28.721336096s" podCreationTimestamp="2026-01-23 16:35:36 +0000 UTC" firstStartedPulling="2026-01-23 
16:35:57.259984879 +0000 UTC m=+1158.407226870" lastFinishedPulling="2026-01-23 16:36:02.563350498 +0000 UTC m=+1163.710592479" observedRunningTime="2026-01-23 16:36:04.720330668 +0000 UTC m=+1165.867572659" watchObservedRunningTime="2026-01-23 16:36:04.721336096 +0000 UTC m=+1165.868578087" Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.681313 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee115b8e-40cf-4641-acd3-13132054a9b7","Type":"ContainerStarted","Data":"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee"} Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.681825 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.683901 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"28af9db0-905d-46cc-8ab9-887e0f58ee9b","Type":"ContainerStarted","Data":"3a8a00e8209ae834c498978e1bd77e82e58caa6671cc459c1f68f38785a2a4f8"} Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.686379 4718 generic.go:334] "Generic (PLEG): container finished" podID="a6481072-75b7-4b67-94a4-94041ef225f6" containerID="811e9dbd70a971cfe6bfeca9f8e804363e7adab96f05acc1151adf0a53792385" exitCode=0 Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.686446 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f2xs2" event={"ID":"a6481072-75b7-4b67-94a4-94041ef225f6","Type":"ContainerDied","Data":"811e9dbd70a971cfe6bfeca9f8e804363e7adab96f05acc1151adf0a53792385"} Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.688593 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cf53eabe-609c-471c-ae7e-ca9fb950f86e","Type":"ContainerStarted","Data":"054822885d1e7432d1f69c43c422a813eba6ad79d2a7e67a5f008f59a63d8f8b"} Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 
16:36:05.689956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg" event={"ID":"d2e5ad0b-04cf-49a9-badc-9e3184385c5b","Type":"ContainerStarted","Data":"a065e00386dd5c7363228f4da6ebb08022311e75a9e5f6819fbeefcd31cd4350"} Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.690023 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c9rfg" Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.699212 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.054992685 podStartE2EDuration="30.699189862s" podCreationTimestamp="2026-01-23 16:35:35 +0000 UTC" firstStartedPulling="2026-01-23 16:35:58.152760384 +0000 UTC m=+1159.300002375" lastFinishedPulling="2026-01-23 16:36:04.796957561 +0000 UTC m=+1165.944199552" observedRunningTime="2026-01-23 16:36:05.695117331 +0000 UTC m=+1166.842359322" watchObservedRunningTime="2026-01-23 16:36:05.699189862 +0000 UTC m=+1166.846431853" Jan 23 16:36:05 crc kubenswrapper[4718]: I0123 16:36:05.719642 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c9rfg" podStartSLOduration=22.246232135 podStartE2EDuration="26.719606556s" podCreationTimestamp="2026-01-23 16:35:39 +0000 UTC" firstStartedPulling="2026-01-23 16:35:58.090014068 +0000 UTC m=+1159.237256059" lastFinishedPulling="2026-01-23 16:36:02.563388489 +0000 UTC m=+1163.710630480" observedRunningTime="2026-01-23 16:36:05.713076899 +0000 UTC m=+1166.860318890" watchObservedRunningTime="2026-01-23 16:36:05.719606556 +0000 UTC m=+1166.866848547" Jan 23 16:36:06 crc kubenswrapper[4718]: I0123 16:36:06.703939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f2xs2" event={"ID":"a6481072-75b7-4b67-94a4-94041ef225f6","Type":"ContainerStarted","Data":"b88bdbf0641bfd3ebe094620ba4601799d0798c4ff269d3700e5d28043b35ccf"} Jan 23 16:36:07 crc 
kubenswrapper[4718]: I0123 16:36:07.120933 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:36:07 crc kubenswrapper[4718]: I0123 16:36:07.121574 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:36:07 crc kubenswrapper[4718]: I0123 16:36:07.128421 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:36:07 crc kubenswrapper[4718]: I0123 16:36:07.722654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerStarted","Data":"62f23b39e112ce7efcce20f00cec1b8b83fbf3ccbf4e593270225b1ca91c68db"} Jan 23 16:36:07 crc kubenswrapper[4718]: I0123 16:36:07.727225 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bf5998ddd-nzjmh" Jan 23 16:36:07 crc kubenswrapper[4718]: I0123 16:36:07.844183 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.632776 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.731433 4718 generic.go:334] "Generic (PLEG): container finished" podID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerID="21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1" exitCode=0 Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.731516 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" event={"ID":"a5ce9192-bc15-453c-a45d-c242b273bb74","Type":"ContainerDied","Data":"21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1"} Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.734100 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cf53eabe-609c-471c-ae7e-ca9fb950f86e","Type":"ContainerStarted","Data":"b4bea9bb05cbda805d103a05776103dd09ea88b906eff899f5d0a238444a7a9a"} Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.740620 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"28af9db0-905d-46cc-8ab9-887e0f58ee9b","Type":"ContainerStarted","Data":"e39fc9a501a0e912789be9f8449f6c7e22c224dd0f3322e1dccf761f0a2aba1b"} Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.747335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f2xs2" event={"ID":"a6481072-75b7-4b67-94a4-94041ef225f6","Type":"ContainerStarted","Data":"48ad8199f848d71b3539fe218d97bac6b8375056c0fa346523c6f03814c408d3"} Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.747912 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.748280 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.787164 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f2xs2" podStartSLOduration=24.568894808 podStartE2EDuration="29.787140444s" podCreationTimestamp="2026-01-23 16:35:39 +0000 UTC" firstStartedPulling="2026-01-23 16:35:57.266525627 +0000 UTC m=+1158.413767638" lastFinishedPulling="2026-01-23 16:36:02.484771283 +0000 UTC m=+1163.632013274" observedRunningTime="2026-01-23 16:36:08.786783284 +0000 UTC m=+1169.934025275" watchObservedRunningTime="2026-01-23 16:36:08.787140444 +0000 UTC m=+1169.934382425" Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.818085 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" 
podStartSLOduration=18.001154672 podStartE2EDuration="27.818059094s" podCreationTimestamp="2026-01-23 16:35:41 +0000 UTC" firstStartedPulling="2026-01-23 16:35:58.252238566 +0000 UTC m=+1159.399480557" lastFinishedPulling="2026-01-23 16:36:08.069142988 +0000 UTC m=+1169.216384979" observedRunningTime="2026-01-23 16:36:08.809313936 +0000 UTC m=+1169.956555937" watchObservedRunningTime="2026-01-23 16:36:08.818059094 +0000 UTC m=+1169.965301085" Jan 23 16:36:08 crc kubenswrapper[4718]: I0123 16:36:08.834904 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.541358371 podStartE2EDuration="31.834877921s" podCreationTimestamp="2026-01-23 16:35:37 +0000 UTC" firstStartedPulling="2026-01-23 16:35:57.806478056 +0000 UTC m=+1158.953720037" lastFinishedPulling="2026-01-23 16:36:08.099997596 +0000 UTC m=+1169.247239587" observedRunningTime="2026-01-23 16:36:08.829152595 +0000 UTC m=+1169.976394586" watchObservedRunningTime="2026-01-23 16:36:08.834877921 +0000 UTC m=+1169.982119912" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.256067 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.256553 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.301672 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.851068 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" event={"ID":"a5ce9192-bc15-453c-a45d-c242b273bb74","Type":"ContainerStarted","Data":"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"} Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.852307 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.859192 4718 generic.go:334] "Generic (PLEG): container finished" podID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerID="c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893" exitCode=0 Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.859608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" event={"ID":"4d43ef11-3b10-4dfd-981b-4d981157db0e","Type":"ContainerDied","Data":"c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893"} Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.904828 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" podStartSLOduration=3.222732668 podStartE2EDuration="40.904793088s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:30.415999693 +0000 UTC m=+1131.563241684" lastFinishedPulling="2026-01-23 16:36:08.098060103 +0000 UTC m=+1169.245302104" observedRunningTime="2026-01-23 16:36:09.880178879 +0000 UTC m=+1171.027420870" watchObservedRunningTime="2026-01-23 16:36:09.904793088 +0000 UTC m=+1171.052035109" Jan 23 16:36:09 crc kubenswrapper[4718]: I0123 16:36:09.987303 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.062046 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.102393 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.270342 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 
16:36:10.287606 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mwsph"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.289584 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.305336 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.310248 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mwsph"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.356183 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.364154 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.372418 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.381964 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.400068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovs-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.400146 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6sp\" (UniqueName: \"kubernetes.io/projected/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-kube-api-access-dz6sp\") 
pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.400173 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-combined-ca-bundle\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.400530 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovn-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.400669 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.401041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-config\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502780 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-config\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502878 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnrks\" (UniqueName: \"kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502909 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovs-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502938 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6sp\" (UniqueName: \"kubernetes.io/projected/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-kube-api-access-dz6sp\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.502978 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-combined-ca-bundle\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovn-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503144 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503233 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503363 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovn-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-ovs-rundir\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.503950 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-config\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.521024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-combined-ca-bundle\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.521135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6sp\" (UniqueName: \"kubernetes.io/projected/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-kube-api-access-dz6sp\") pod \"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.521555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760d2aa5-6dd3-43b1-8447-b1d1e655ee14-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-mwsph\" (UID: \"760d2aa5-6dd3-43b1-8447-b1d1e655ee14\") " pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.605204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.605309 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.605342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.605434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnrks\" (UniqueName: \"kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.606753 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: 
\"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.607350 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.607469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.623706 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mwsph" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.639494 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnrks\" (UniqueName: \"kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks\") pod \"dnsmasq-dns-6bc7876d45-l777x\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") " pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.688273 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.718530 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.752744 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.766147 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.771355 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.772208 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"] Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.891065 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="dnsmasq-dns" containerID="cri-o://7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e" gracePeriod=10 Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.891351 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" event={"ID":"4d43ef11-3b10-4dfd-981b-4d981157db0e","Type":"ContainerStarted","Data":"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"} Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.895209 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.895240 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.913871 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lc97\" (UniqueName: \"kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.914463 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.914527 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.914573 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.914600 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:10 crc kubenswrapper[4718]: 
I0123 16:36:10.920863 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" podStartSLOduration=-9223371994.933935 podStartE2EDuration="41.920840411s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:31.166359149 +0000 UTC m=+1132.313601140" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:10.91419259 +0000 UTC m=+1172.061434581" watchObservedRunningTime="2026-01-23 16:36:10.920840411 +0000 UTC m=+1172.068082402" Jan 23 16:36:10 crc kubenswrapper[4718]: I0123 16:36:10.967838 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.016537 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lc97\" (UniqueName: \"kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.016649 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.016709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.016750 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.016773 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.019032 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.019759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.020730 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.020759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.036983 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lc97\" (UniqueName: \"kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97\") pod \"dnsmasq-dns-8554648995-w2vmx\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.095476 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.195423 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.197667 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.208526 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.208775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.208924 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.214140 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7bpfz" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.240028 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.331745 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"] Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.345820 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.345896 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklpf\" (UniqueName: \"kubernetes.io/projected/873a6275-b8f2-4554-9c4d-f44a6629111d-kube-api-access-pklpf\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.346024 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.346113 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-scripts\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.346217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.346381 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-config\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.346408 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.355825 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mwsph"] Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.447617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-config\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.447796 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.447936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.447969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklpf\" (UniqueName: \"kubernetes.io/projected/873a6275-b8f2-4554-9c4d-f44a6629111d-kube-api-access-pklpf\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.448021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.448072 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-scripts\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 
16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.448112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.449201 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-config\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.450273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.450296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/873a6275-b8f2-4554-9c4d-f44a6629111d-scripts\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.453041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.455841 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.458964 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/873a6275-b8f2-4554-9c4d-f44a6629111d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.470447 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklpf\" (UniqueName: \"kubernetes.io/projected/873a6275-b8f2-4554-9c4d-f44a6629111d-kube-api-access-pklpf\") pod \"ovn-northd-0\" (UID: \"873a6275-b8f2-4554-9c4d-f44a6629111d\") " pod="openstack/ovn-northd-0" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.493140 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.550909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config\") pod \"4d43ef11-3b10-4dfd-981b-4d981157db0e\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.551110 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc\") pod \"4d43ef11-3b10-4dfd-981b-4d981157db0e\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") " Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.551183 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vft64\" (UniqueName: \"kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64\") pod 
\"4d43ef11-3b10-4dfd-981b-4d981157db0e\" (UID: \"4d43ef11-3b10-4dfd-981b-4d981157db0e\") "
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.557442 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64" (OuterVolumeSpecName: "kube-api-access-vft64") pod "4d43ef11-3b10-4dfd-981b-4d981157db0e" (UID: "4d43ef11-3b10-4dfd-981b-4d981157db0e"). InnerVolumeSpecName "kube-api-access-vft64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.629780 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d43ef11-3b10-4dfd-981b-4d981157db0e" (UID: "4d43ef11-3b10-4dfd-981b-4d981157db0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.637184 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config" (OuterVolumeSpecName: "config") pod "4d43ef11-3b10-4dfd-981b-4d981157db0e" (UID: "4d43ef11-3b10-4dfd-981b-4d981157db0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.660176 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.660226 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vft64\" (UniqueName: \"kubernetes.io/projected/4d43ef11-3b10-4dfd-981b-4d981157db0e-kube-api-access-vft64\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.660250 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d43ef11-3b10-4dfd-981b-4d981157db0e-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.703912 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"]
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.717020 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.905080 4718 generic.go:334] "Generic (PLEG): container finished" podID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerID="7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e" exitCode=0
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.905153 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" event={"ID":"4d43ef11-3b10-4dfd-981b-4d981157db0e","Type":"ContainerDied","Data":"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.905219 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8"
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.905695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mdkn8" event={"ID":"4d43ef11-3b10-4dfd-981b-4d981157db0e","Type":"ContainerDied","Data":"7a799b6327ed42b8fbc75e39e48e4a8dc8c602869cdeca6f0176107c05a3552f"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.905729 4718 scope.go:117] "RemoveContainer" containerID="7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.909715 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w2vmx" event={"ID":"11cd1f3c-a5a5-468a-b175-5c025448f1a8","Type":"ContainerStarted","Data":"3aa3cca13b9c61c9462ac1cc403cd06b6f004c4b3c0bc8dcd795b7334070770e"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.911452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mwsph" event={"ID":"760d2aa5-6dd3-43b1-8447-b1d1e655ee14","Type":"ContainerStarted","Data":"0874d7c9d2db771bec2062143550f82e2bdd8456e68b4e04baef7d158f9e94fa"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.911889 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mwsph" event={"ID":"760d2aa5-6dd3-43b1-8447-b1d1e655ee14","Type":"ContainerStarted","Data":"ae9618eefd6c1a9b636493fd75964b31108c7ba1e0c81a685061db96795f83cf"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.913154 4718 generic.go:334] "Generic (PLEG): container finished" podID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerID="50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928" exitCode=0
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.913499 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" event={"ID":"1cb00997-1b0e-4e89-ad5e-78655d0214db","Type":"ContainerDied","Data":"50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.913548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" event={"ID":"1cb00997-1b0e-4e89-ad5e-78655d0214db","Type":"ContainerStarted","Data":"4ff212d06e567694d3ce7e990865e701de3c950afa91f912d12d71123017faf7"}
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.914234 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="dnsmasq-dns" containerID="cri-o://1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15" gracePeriod=10
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.939484 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mwsph" podStartSLOduration=1.9394536740000001 podStartE2EDuration="1.939453674s" podCreationTimestamp="2026-01-23 16:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:11.932110915 +0000 UTC m=+1173.079352906" watchObservedRunningTime="2026-01-23 16:36:11.939453674 +0000 UTC m=+1173.086695665"
Jan 23 16:36:11 crc kubenswrapper[4718]: I0123 16:36:11.969990 4718 scope.go:117] "RemoveContainer" containerID="c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.015656 4718 scope.go:117] "RemoveContainer" containerID="7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"
Jan 23 16:36:12 crc kubenswrapper[4718]: E0123 16:36:12.016078 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e\": container with ID starting with 7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e not found: ID does not exist" containerID="7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.016117 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e"} err="failed to get container status \"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e\": rpc error: code = NotFound desc = could not find container \"7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e\": container with ID starting with 7ae3bbaa3e6f42ed2a9b3467da744fa75c745571f1ff506cf9c6f84edb4b029e not found: ID does not exist"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.016147 4718 scope.go:117] "RemoveContainer" containerID="c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893"
Jan 23 16:36:12 crc kubenswrapper[4718]: E0123 16:36:12.016461 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893\": container with ID starting with c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893 not found: ID does not exist" containerID="c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.016521 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893"} err="failed to get container status \"c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893\": rpc error: code = NotFound desc = could not find container \"c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893\": container with ID starting with c0fa4828a91fc7eef29b812e07730f459641f145df3b962a55e6134f5e916893 not found: ID does not exist"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.047861 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"]
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.077211 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mdkn8"]
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.280937 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.491189 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.590513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config\") pod \"a5ce9192-bc15-453c-a45d-c242b273bb74\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") "
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.590568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc\") pod \"a5ce9192-bc15-453c-a45d-c242b273bb74\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") "
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.590894 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv9cd\" (UniqueName: \"kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd\") pod \"a5ce9192-bc15-453c-a45d-c242b273bb74\" (UID: \"a5ce9192-bc15-453c-a45d-c242b273bb74\") "
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.597428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd" (OuterVolumeSpecName: "kube-api-access-hv9cd") pod "a5ce9192-bc15-453c-a45d-c242b273bb74" (UID: "a5ce9192-bc15-453c-a45d-c242b273bb74"). InnerVolumeSpecName "kube-api-access-hv9cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.648412 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config" (OuterVolumeSpecName: "config") pod "a5ce9192-bc15-453c-a45d-c242b273bb74" (UID: "a5ce9192-bc15-453c-a45d-c242b273bb74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.648850 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5ce9192-bc15-453c-a45d-c242b273bb74" (UID: "a5ce9192-bc15-453c-a45d-c242b273bb74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.693416 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv9cd\" (UniqueName: \"kubernetes.io/projected/a5ce9192-bc15-453c-a45d-c242b273bb74-kube-api-access-hv9cd\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.693455 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.693464 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ce9192-bc15-453c-a45d-c242b273bb74-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.926418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"592a76d3-742f-47a0-9054-309fb2670fa3","Type":"ContainerStarted","Data":"91137858c80da0a1ba88154acdf9707d48f9103850d456e9ff054ce427723e74"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.933241 4718 generic.go:334] "Generic (PLEG): container finished" podID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerID="1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15" exitCode=0
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.933282 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.933397 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" event={"ID":"a5ce9192-bc15-453c-a45d-c242b273bb74","Type":"ContainerDied","Data":"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.933440 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q6p2x" event={"ID":"a5ce9192-bc15-453c-a45d-c242b273bb74","Type":"ContainerDied","Data":"04196eadc0f99a0292420dc6d7b3a8a0c3395e67f56e996a6e990bfa095934de"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.933459 4718 scope.go:117] "RemoveContainer" containerID="1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.944567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"873a6275-b8f2-4554-9c4d-f44a6629111d","Type":"ContainerStarted","Data":"c286d9aea39a9b7b2e836d61fedcc57fca57502b1d1df249ba5ba7bd84c6d151"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.949227 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" event={"ID":"1cb00997-1b0e-4e89-ad5e-78655d0214db","Type":"ContainerStarted","Data":"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.950589 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-l777x"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.961452 4718 generic.go:334] "Generic (PLEG): container finished" podID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerID="67c7e1b8eb6df05840fee5a0f97e33615de2b6547cc7d4d5fd20d81d1a2d92ba" exitCode=0
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.961606 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w2vmx" event={"ID":"11cd1f3c-a5a5-468a-b175-5c025448f1a8","Type":"ContainerDied","Data":"67c7e1b8eb6df05840fee5a0f97e33615de2b6547cc7d4d5fd20d81d1a2d92ba"}
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.978856 4718 scope.go:117] "RemoveContainer" containerID="21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1"
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.988475 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"]
Jan 23 16:36:12 crc kubenswrapper[4718]: I0123 16:36:12.999880 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q6p2x"]
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.001423 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" podStartSLOduration=3.001401405 podStartE2EDuration="3.001401405s" podCreationTimestamp="2026-01-23 16:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:12.996141281 +0000 UTC m=+1174.143383272" watchObservedRunningTime="2026-01-23 16:36:13.001401405 +0000 UTC m=+1174.148643396"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.010792 4718 scope.go:117] "RemoveContainer" containerID="1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"
Jan 23 16:36:13 crc kubenswrapper[4718]: E0123 16:36:13.011428 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15\": container with ID starting with 1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15 not found: ID does not exist" containerID="1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.011465 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15"} err="failed to get container status \"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15\": rpc error: code = NotFound desc = could not find container \"1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15\": container with ID starting with 1b655d0062dfc1b1d7f6aa468e8e985d15cc4c735b7ebf17e710450d7b86eb15 not found: ID does not exist"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.011514 4718 scope.go:117] "RemoveContainer" containerID="21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1"
Jan 23 16:36:13 crc kubenswrapper[4718]: E0123 16:36:13.012015 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1\": container with ID starting with 21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1 not found: ID does not exist" containerID="21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.012085 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1"} err="failed to get container status \"21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1\": rpc error: code = NotFound desc = could not find container \"21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1\": container with ID starting with 21d750f8e7f11a9f9dda4098160cf3d6e9931b9e07ccd7545cd93951b27557a1 not found: ID does not exist"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.163095 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" path="/var/lib/kubelet/pods/4d43ef11-3b10-4dfd-981b-4d981157db0e/volumes"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.164279 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" path="/var/lib/kubelet/pods/a5ce9192-bc15-453c-a45d-c242b273bb74/volumes"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.626099 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.626564 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.722567 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.973508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w2vmx" event={"ID":"11cd1f3c-a5a5-468a-b175-5c025448f1a8","Type":"ContainerStarted","Data":"228fc8d5a1072c7892f6b79cca29d64f5555e7796777abc70f6fca94573475ec"}
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.973681 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-w2vmx"
Jan 23 16:36:13 crc kubenswrapper[4718]: I0123 16:36:13.997079 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-w2vmx" podStartSLOduration=3.997057535 podStartE2EDuration="3.997057535s" podCreationTimestamp="2026-01-23 16:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:13.993030225 +0000 UTC m=+1175.140272226" watchObservedRunningTime="2026-01-23 16:36:13.997057535 +0000 UTC m=+1175.144299526"
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.057552 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.987999 4718 generic.go:334] "Generic (PLEG): container finished" podID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerID="62f23b39e112ce7efcce20f00cec1b8b83fbf3ccbf4e593270225b1ca91c68db" exitCode=0
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.988118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerDied","Data":"62f23b39e112ce7efcce20f00cec1b8b83fbf3ccbf4e593270225b1ca91c68db"}
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.992844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"873a6275-b8f2-4554-9c4d-f44a6629111d","Type":"ContainerStarted","Data":"3c6fc5dd1e1f355432d4466be01dd0556394a6fb8e82984310648d0918eedab2"}
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.994169 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 23 16:36:14 crc kubenswrapper[4718]: I0123 16:36:14.994190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"873a6275-b8f2-4554-9c4d-f44a6629111d","Type":"ContainerStarted","Data":"c0a0e8e3c23369a912d78833c4455ba6e29e328faa28e68678a07b1da7df96bb"}
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.080276 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.27474088 podStartE2EDuration="4.080256972s" podCreationTimestamp="2026-01-23 16:36:11 +0000 UTC" firstStartedPulling="2026-01-23 16:36:12.322891251 +0000 UTC m=+1173.470133242" lastFinishedPulling="2026-01-23 16:36:14.128407343 +0000 UTC m=+1175.275649334" observedRunningTime="2026-01-23 16:36:15.077127687 +0000 UTC m=+1176.224369698" watchObservedRunningTime="2026-01-23 16:36:15.080256972 +0000 UTC m=+1176.227498963"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.880277 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.904647 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"]
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.982795 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"]
Jan 23 16:36:15 crc kubenswrapper[4718]: E0123 16:36:15.983407 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="init"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983430 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="init"
Jan 23 16:36:15 crc kubenswrapper[4718]: E0123 16:36:15.983461 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983473 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: E0123 16:36:15.983491 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="init"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983500 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="init"
Jan 23 16:36:15 crc kubenswrapper[4718]: E0123 16:36:15.983521 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983529 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983783 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ce9192-bc15-453c-a45d-c242b273bb74" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.983811 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d43ef11-3b10-4dfd-981b-4d981157db0e" containerName="dnsmasq-dns"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.985180 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:15 crc kubenswrapper[4718]: I0123 16:36:15.998573 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"]
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.007479 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="dnsmasq-dns" containerID="cri-o://4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2" gracePeriod=10
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.197123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.197176 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.197220 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.197286 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.197377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbpf8\" (UniqueName: \"kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.301077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbpf8\" (UniqueName: \"kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.301299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.301336 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.301373 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.301450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.307932 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.308016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.308473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.308726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.338181 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbpf8\" (UniqueName: \"kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8\") pod \"dnsmasq-dns-b8fbc5445-r87qz\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.572895 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l777x"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.631143 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz"
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.713813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnrks\" (UniqueName: \"kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks\") pod \"1cb00997-1b0e-4e89-ad5e-78655d0214db\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") "
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.713870 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config\") pod \"1cb00997-1b0e-4e89-ad5e-78655d0214db\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") "
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.713922 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc\") pod \"1cb00997-1b0e-4e89-ad5e-78655d0214db\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") "
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.714031 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb\") pod \"1cb00997-1b0e-4e89-ad5e-78655d0214db\" (UID: \"1cb00997-1b0e-4e89-ad5e-78655d0214db\") "
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.729804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks" (OuterVolumeSpecName: "kube-api-access-xnrks") pod "1cb00997-1b0e-4e89-ad5e-78655d0214db" (UID: "1cb00997-1b0e-4e89-ad5e-78655d0214db"). InnerVolumeSpecName "kube-api-access-xnrks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.776312 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cb00997-1b0e-4e89-ad5e-78655d0214db" (UID: "1cb00997-1b0e-4e89-ad5e-78655d0214db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.787401 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config" (OuterVolumeSpecName: "config") pod "1cb00997-1b0e-4e89-ad5e-78655d0214db" (UID: "1cb00997-1b0e-4e89-ad5e-78655d0214db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.797152 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cb00997-1b0e-4e89-ad5e-78655d0214db" (UID: "1cb00997-1b0e-4e89-ad5e-78655d0214db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.816185 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnrks\" (UniqueName: \"kubernetes.io/projected/1cb00997-1b0e-4e89-ad5e-78655d0214db-kube-api-access-xnrks\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.816210 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.816221 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:16 crc kubenswrapper[4718]: I0123 16:36:16.816230 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb00997-1b0e-4e89-ad5e-78655d0214db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.020820 4718 generic.go:334] "Generic (PLEG): container finished" podID="592a76d3-742f-47a0-9054-309fb2670fa3" containerID="91137858c80da0a1ba88154acdf9707d48f9103850d456e9ff054ce427723e74" exitCode=0
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.020895 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"592a76d3-742f-47a0-9054-309fb2670fa3","Type":"ContainerDied","Data":"91137858c80da0a1ba88154acdf9707d48f9103850d456e9ff054ce427723e74"}
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.024205 4718 generic.go:334] "Generic (PLEG): container finished" podID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerID="4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2" exitCode=0
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.024289 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l777x"
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.024293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" event={"ID":"1cb00997-1b0e-4e89-ad5e-78655d0214db","Type":"ContainerDied","Data":"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2"}
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.024371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l777x" event={"ID":"1cb00997-1b0e-4e89-ad5e-78655d0214db","Type":"ContainerDied","Data":"4ff212d06e567694d3ce7e990865e701de3c950afa91f912d12d71123017faf7"}
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.024418 4718 scope.go:117] "RemoveContainer" containerID="4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2"
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.096480 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.097050 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="init"
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.097072 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="init"
Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.097092 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="dnsmasq-dns"
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.097101 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="dnsmasq-dns"
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.097374 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" containerName="dnsmasq-dns" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.107222 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"] Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.107360 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.110001 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.110041 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.110112 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z4wbr" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.110288 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.113206 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l777x"] Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.120866 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.131389 4718 scope.go:117] "RemoveContainer" containerID="50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.162382 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb00997-1b0e-4e89-ad5e-78655d0214db" path="/var/lib/kubelet/pods/1cb00997-1b0e-4e89-ad5e-78655d0214db/volumes" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.171935 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"] Jan 23 16:36:17 crc 
kubenswrapper[4718]: W0123 16:36:17.185725 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9914de17_33e2_4fea_a394_da364f4d8b43.slice/crio-ecd1792be5cf5217099a4af524b7c7059934ea00d20eeac89f17ad22d4c7ad2d WatchSource:0}: Error finding container ecd1792be5cf5217099a4af524b7c7059934ea00d20eeac89f17ad22d4c7ad2d: Status 404 returned error can't find the container with id ecd1792be5cf5217099a4af524b7c7059934ea00d20eeac89f17ad22d4c7ad2d Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.211104 4718 scope.go:117] "RemoveContainer" containerID="4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2" Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.211834 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2\": container with ID starting with 4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2 not found: ID does not exist" containerID="4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.211890 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2"} err="failed to get container status \"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2\": rpc error: code = NotFound desc = could not find container \"4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2\": container with ID starting with 4b33ed99fcc7f366ac0280b285574bdc80b9a40e2ba59550ef81dddda818e9e2 not found: ID does not exist" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.211922 4718 scope.go:117] "RemoveContainer" containerID="50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928" Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.212486 
4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928\": container with ID starting with 50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928 not found: ID does not exist" containerID="50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.212539 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928"} err="failed to get container status \"50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928\": rpc error: code = NotFound desc = could not find container \"50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928\": container with ID starting with 50e5023062f383d24982b1d58875d056f480f21d5140055939528a8be9dc5928 not found: ID does not exist" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.226993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwqm\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-kube-api-access-2gwqm\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.227440 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-lock\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.227504 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3383bbd9-d755-435c-9d57-c66c5cadaf09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.227530 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-cache\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.227571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.227616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwqm\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-kube-api-access-2gwqm\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329490 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-lock\") pod \"swift-storage-0\" (UID: 
\"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329537 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3383bbd9-d755-435c-9d57-c66c5cadaf09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-cache\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329593 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.329651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.330252 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-lock\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.330773 4718 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.330796 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.330856 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. No retries permitted until 2026-01-23 16:36:17.830840865 +0000 UTC m=+1178.978082856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.331112 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3383bbd9-d755-435c-9d57-c66c5cadaf09-cache\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.335745 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.335776 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac1fe233eb98ca29682d7a102199445c687bfee941d466928918710fdcca1a4/globalmount\"" pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.345677 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwqm\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-kube-api-access-2gwqm\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.372835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3383bbd9-d755-435c-9d57-c66c5cadaf09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.403486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d75f8a2-e8c0-4837-aeeb-05e3e188e1c5\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: I0123 16:36:17.841246 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: 
\"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.841473 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.841504 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:17 crc kubenswrapper[4718]: E0123 16:36:17.841575 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. No retries permitted until 2026-01-23 16:36:18.84155427 +0000 UTC m=+1179.988796271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:18 crc kubenswrapper[4718]: I0123 16:36:18.039555 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"592a76d3-742f-47a0-9054-309fb2670fa3","Type":"ContainerStarted","Data":"de3238ed0cd9f0bc45c171a51a1f6391afdedc88f81c2fa5170239279425948b"} Jan 23 16:36:18 crc kubenswrapper[4718]: I0123 16:36:18.046666 4718 generic.go:334] "Generic (PLEG): container finished" podID="9914de17-33e2-4fea-a394-da364f4d8b43" containerID="d9014ad971d16a3996ce4065fd153bd2e7f43dbe6496ac69144209a47dec9109" exitCode=0 Jan 23 16:36:18 crc kubenswrapper[4718]: I0123 16:36:18.046728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" event={"ID":"9914de17-33e2-4fea-a394-da364f4d8b43","Type":"ContainerDied","Data":"d9014ad971d16a3996ce4065fd153bd2e7f43dbe6496ac69144209a47dec9109"} Jan 23 16:36:18 crc 
kubenswrapper[4718]: I0123 16:36:18.046757 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" event={"ID":"9914de17-33e2-4fea-a394-da364f4d8b43","Type":"ContainerStarted","Data":"ecd1792be5cf5217099a4af524b7c7059934ea00d20eeac89f17ad22d4c7ad2d"} Jan 23 16:36:18 crc kubenswrapper[4718]: I0123 16:36:18.083082 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.771717 podStartE2EDuration="48.083059581s" podCreationTimestamp="2026-01-23 16:35:30 +0000 UTC" firstStartedPulling="2026-01-23 16:35:40.2084467 +0000 UTC m=+1141.355688681" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:18.070993083 +0000 UTC m=+1179.218235084" watchObservedRunningTime="2026-01-23 16:36:18.083059581 +0000 UTC m=+1179.230301572" Jan 23 16:36:18 crc kubenswrapper[4718]: I0123 16:36:18.863152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:18 crc kubenswrapper[4718]: E0123 16:36:18.863610 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:18 crc kubenswrapper[4718]: E0123 16:36:18.863758 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:18 crc kubenswrapper[4718]: E0123 16:36:18.863937 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. 
No retries permitted until 2026-01-23 16:36:20.863869054 +0000 UTC m=+1182.011111065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:19 crc kubenswrapper[4718]: I0123 16:36:19.061128 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" event={"ID":"9914de17-33e2-4fea-a394-da364f4d8b43","Type":"ContainerStarted","Data":"e6b1e90a9e018b18b4c1a13925be18abc569ba987a0407d362a78180763c5032"} Jan 23 16:36:19 crc kubenswrapper[4718]: I0123 16:36:19.061411 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" Jan 23 16:36:19 crc kubenswrapper[4718]: I0123 16:36:19.089915 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podStartSLOduration=4.089894474 podStartE2EDuration="4.089894474s" podCreationTimestamp="2026-01-23 16:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:19.079860641 +0000 UTC m=+1180.227102632" watchObservedRunningTime="2026-01-23 16:36:19.089894474 +0000 UTC m=+1180.237136465" Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.906098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:20 crc kubenswrapper[4718]: E0123 16:36:20.906446 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:20 crc kubenswrapper[4718]: E0123 
16:36:20.906704 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:20 crc kubenswrapper[4718]: E0123 16:36:20.906813 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. No retries permitted until 2026-01-23 16:36:24.906766004 +0000 UTC m=+1186.054007995 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.924516 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v2fl4"] Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.925941 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.928994 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.929282 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.933035 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 16:36:20 crc kubenswrapper[4718]: I0123 16:36:20.945079 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v2fl4"] Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.010735 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.010858 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.010908 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.011070 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.011093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkqp\" (UniqueName: \"kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.011190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.011257 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.097794 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.113550 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf\") pod 
\"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.113652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.113715 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.113749 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.114446 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.114900 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " 
pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.114966 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.115012 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zkqp\" (UniqueName: \"kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.115146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.116022 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.122258 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 
16:36:21.127889 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.140256 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.147515 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zkqp\" (UniqueName: \"kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp\") pod \"swift-ring-rebalance-v2fl4\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:21 crc kubenswrapper[4718]: I0123 16:36:21.251358 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:21.714390 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:21.722103 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v2fl4"] Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:21.987375 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-szh5r"] Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:21.989384 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:21.994028 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.002564 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-szh5r"] Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.034218 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.034295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.038889 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.039217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n98ts\" (UniqueName: \"kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.093207 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2fl4" event={"ID":"9a040b54-6ee7-446b-83f1-b6b5c211ef43","Type":"ContainerStarted","Data":"6628ff987ae67a93bf5901dcec336f5aec4dbab32a2324dc4ffb8ae4bf887a72"} Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.140515 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n98ts\" (UniqueName: \"kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.140615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.141929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.159124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n98ts\" (UniqueName: \"kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts\") pod \"root-account-create-update-szh5r\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:24 crc kubenswrapper[4718]: I0123 16:36:22.328104 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:25 crc kubenswrapper[4718]: I0123 16:36:25.004113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:25 crc kubenswrapper[4718]: E0123 16:36:25.004376 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:25 crc kubenswrapper[4718]: E0123 16:36:25.004608 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:25 crc kubenswrapper[4718]: E0123 16:36:25.004674 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. No retries permitted until 2026-01-23 16:36:33.004658334 +0000 UTC m=+1194.151900325 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:25 crc kubenswrapper[4718]: I0123 16:36:25.066349 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-szh5r"] Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.128283 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.201547 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.632845 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.696351 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"] Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.696605 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-w2vmx" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="dnsmasq-dns" containerID="cri-o://228fc8d5a1072c7892f6b79cca29d64f5555e7796777abc70f6fca94573475ec" gracePeriod=10 Jan 23 16:36:26 crc kubenswrapper[4718]: I0123 16:36:26.786677 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.172938 4718 generic.go:334] "Generic (PLEG): container finished" podID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerID="228fc8d5a1072c7892f6b79cca29d64f5555e7796777abc70f6fca94573475ec" exitCode=0 Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.173018 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-8554648995-w2vmx" event={"ID":"11cd1f3c-a5a5-468a-b175-5c025448f1a8","Type":"ContainerDied","Data":"228fc8d5a1072c7892f6b79cca29d64f5555e7796777abc70f6fca94573475ec"} Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.175869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-szh5r" event={"ID":"157ab1a3-a5c6-48e7-8b8a-d53e8607e191","Type":"ContainerStarted","Data":"168cdc2c66c9f68b9693b4fa409147c6cc7b8ab762bc26018b96a30ba03a4bc5"} Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.203269 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.300585 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb\") pod \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.300916 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc\") pod \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.300953 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb\") pod \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.301030 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lc97\" (UniqueName: 
\"kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97\") pod \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.301701 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config\") pod \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\" (UID: \"11cd1f3c-a5a5-468a-b175-5c025448f1a8\") " Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.311973 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97" (OuterVolumeSpecName: "kube-api-access-4lc97") pod "11cd1f3c-a5a5-468a-b175-5c025448f1a8" (UID: "11cd1f3c-a5a5-468a-b175-5c025448f1a8"). InnerVolumeSpecName "kube-api-access-4lc97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.368157 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11cd1f3c-a5a5-468a-b175-5c025448f1a8" (UID: "11cd1f3c-a5a5-468a-b175-5c025448f1a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.375315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11cd1f3c-a5a5-468a-b175-5c025448f1a8" (UID: "11cd1f3c-a5a5-468a-b175-5c025448f1a8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.377207 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config" (OuterVolumeSpecName: "config") pod "11cd1f3c-a5a5-468a-b175-5c025448f1a8" (UID: "11cd1f3c-a5a5-468a-b175-5c025448f1a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.391960 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11cd1f3c-a5a5-468a-b175-5c025448f1a8" (UID: "11cd1f3c-a5a5-468a-b175-5c025448f1a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.404295 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.404339 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.404355 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lc97\" (UniqueName: \"kubernetes.io/projected/11cd1f3c-a5a5-468a-b175-5c025448f1a8-kube-api-access-4lc97\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.404366 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:28 crc 
kubenswrapper[4718]: I0123 16:36:28.404375 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11cd1f3c-a5a5-468a-b175-5c025448f1a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.853396 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xgcbw"] Jan 23 16:36:28 crc kubenswrapper[4718]: E0123 16:36:28.854225 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="dnsmasq-dns" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.854261 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="dnsmasq-dns" Jan 23 16:36:28 crc kubenswrapper[4718]: E0123 16:36:28.854292 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="init" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.854303 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="init" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.854592 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" containerName="dnsmasq-dns" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.855908 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.865909 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xgcbw"] Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.916445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts\") pod \"glance-db-create-xgcbw\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.916548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxbr\" (UniqueName: \"kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr\") pod \"glance-db-create-xgcbw\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.929787 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2c18-account-create-update-q8bm5"] Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.931406 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.935131 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 16:36:28 crc kubenswrapper[4718]: I0123 16:36:28.938789 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2c18-account-create-update-q8bm5"] Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.022925 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts\") pod \"glance-db-create-xgcbw\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.023545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkh9j\" (UniqueName: \"kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.023578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.023668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsxbr\" (UniqueName: \"kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr\") pod \"glance-db-create-xgcbw\" (UID: 
\"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.024916 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts\") pod \"glance-db-create-xgcbw\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.070359 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsxbr\" (UniqueName: \"kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr\") pod \"glance-db-create-xgcbw\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.126160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.126213 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkh9j\" (UniqueName: \"kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.127422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: 
\"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.184080 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkh9j\" (UniqueName: \"kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j\") pod \"glance-2c18-account-create-update-q8bm5\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.206101 4718 generic.go:334] "Generic (PLEG): container finished" podID="157ab1a3-a5c6-48e7-8b8a-d53e8607e191" containerID="2fef545cfa3d673bfcbf9a716e78e6633575248134cfd4d0a9eb86a7ef0eacd4" exitCode=0 Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.206172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-szh5r" event={"ID":"157ab1a3-a5c6-48e7-8b8a-d53e8607e191","Type":"ContainerDied","Data":"2fef545cfa3d673bfcbf9a716e78e6633575248134cfd4d0a9eb86a7ef0eacd4"} Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.208537 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerStarted","Data":"84db3a5916a27ae74bb4bcf9237a619a6327726a16eb4d4ddd9aa2a66c198133"} Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.221819 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-w2vmx" event={"ID":"11cd1f3c-a5a5-468a-b175-5c025448f1a8","Type":"ContainerDied","Data":"3aa3cca13b9c61c9462ac1cc403cd06b6f004c4b3c0bc8dcd795b7334070770e"} Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.222107 4718 scope.go:117] "RemoveContainer" containerID="228fc8d5a1072c7892f6b79cca29d64f5555e7796777abc70f6fca94573475ec" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.222170 4718 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-w2vmx" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.233942 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.260300 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.269137 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"] Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.283575 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-w2vmx"] Jan 23 16:36:29 crc kubenswrapper[4718]: I0123 16:36:29.864520 4718 scope.go:117] "RemoveContainer" containerID="67c7e1b8eb6df05840fee5a0f97e33615de2b6547cc7d4d5fd20d81d1a2d92ba" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.156594 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cd1f3c-a5a5-468a-b175-5c025448f1a8" path="/var/lib/kubelet/pods/11cd1f3c-a5a5-468a-b175-5c025448f1a8/volumes" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.248450 4718 generic.go:334] "Generic (PLEG): container finished" podID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerID="75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49" exitCode=0 Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.248492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerDied","Data":"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49"} Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.801502 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.830228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n98ts\" (UniqueName: \"kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts\") pod \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.830330 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts\") pod \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\" (UID: \"157ab1a3-a5c6-48e7-8b8a-d53e8607e191\") " Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.831030 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "157ab1a3-a5c6-48e7-8b8a-d53e8607e191" (UID: "157ab1a3-a5c6-48e7-8b8a-d53e8607e191"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.831180 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.836345 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts" (OuterVolumeSpecName: "kube-api-access-n98ts") pod "157ab1a3-a5c6-48e7-8b8a-d53e8607e191" (UID: "157ab1a3-a5c6-48e7-8b8a-d53e8607e191"). InnerVolumeSpecName "kube-api-access-n98ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.928176 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xgcbw"] Jan 23 16:36:31 crc kubenswrapper[4718]: I0123 16:36:31.933838 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n98ts\" (UniqueName: \"kubernetes.io/projected/157ab1a3-a5c6-48e7-8b8a-d53e8607e191-kube-api-access-n98ts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.068063 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2c18-account-create-update-q8bm5"] Jan 23 16:36:32 crc kubenswrapper[4718]: W0123 16:36:32.071542 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod597decb2_86a1_400b_9635_cf0d9a10e643.slice/crio-68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3 WatchSource:0}: Error finding container 68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3: Status 404 returned error can't find the container with id 68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3 Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.260481 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c18-account-create-update-q8bm5" event={"ID":"597decb2-86a1-400b-9635-cf0d9a10e643","Type":"ContainerStarted","Data":"952b0ed2a6d63c7f0b6bb18ceab5cb740988dbaf86ab16f90f2fd3e720b5c503"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.260550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c18-account-create-update-q8bm5" event={"ID":"597decb2-86a1-400b-9635-cf0d9a10e643","Type":"ContainerStarted","Data":"68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.262658 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerID="4ee5fb187d594a7dd22f54bcafa87a4a75cd20fc67e027fbd78b5f35074b51db" exitCode=0 Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.262723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerDied","Data":"4ee5fb187d594a7dd22f54bcafa87a4a75cd20fc67e027fbd78b5f35074b51db"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.268570 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerStarted","Data":"d6493d1966a9e258cb3a251aad8359e9a968e642581d2453e5c83eb60e975ea1"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.271939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerStarted","Data":"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.272217 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.274889 4718 generic.go:334] "Generic (PLEG): container finished" podID="39c8c979-1e2b-4757-9b14-3526451859e3" containerID="4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db" exitCode=0 Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.274956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerDied","Data":"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.277480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xgcbw" 
event={"ID":"41974084-91dc-4daa-ba00-561e0d52e4c2","Type":"ContainerStarted","Data":"d6d42073e0ac2b8268bfc287145f498db5e903468e73417f5d2b289d90ce726e"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.277533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xgcbw" event={"ID":"41974084-91dc-4daa-ba00-561e0d52e4c2","Type":"ContainerStarted","Data":"04fcb86502e44be17d34a7ec8b36beb8c6b2688ad3613d34354c202b9f9b60e1"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.283720 4718 generic.go:334] "Generic (PLEG): container finished" podID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerID="57a7d3e23a91f61a08f1235702ca3507ea01f14b822f4e1690cf5b7da6226024" exitCode=0 Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.283863 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerDied","Data":"57a7d3e23a91f61a08f1235702ca3507ea01f14b822f4e1690cf5b7da6226024"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.286124 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-szh5r" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.286282 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-szh5r" event={"ID":"157ab1a3-a5c6-48e7-8b8a-d53e8607e191","Type":"ContainerDied","Data":"168cdc2c66c9f68b9693b4fa409147c6cc7b8ab762bc26018b96a30ba03a4bc5"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.286430 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168cdc2c66c9f68b9693b4fa409147c6cc7b8ab762bc26018b96a30ba03a4bc5" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.289467 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2fl4" event={"ID":"9a040b54-6ee7-446b-83f1-b6b5c211ef43","Type":"ContainerStarted","Data":"c240afb98f0120232c162110fdd968a744587ed91ca2c710af414e13226272b6"} Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.291846 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2c18-account-create-update-q8bm5" podStartSLOduration=4.291822687 podStartE2EDuration="4.291822687s" podCreationTimestamp="2026-01-23 16:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:32.283410889 +0000 UTC m=+1193.430652880" watchObservedRunningTime="2026-01-23 16:36:32.291822687 +0000 UTC m=+1193.439064678" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.394917 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.563213624 podStartE2EDuration="1m3.394895768s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:31.405117505 +0000 UTC m=+1132.552359496" lastFinishedPulling="2026-01-23 16:35:57.236799649 +0000 UTC m=+1158.384041640" observedRunningTime="2026-01-23 
16:36:32.391829684 +0000 UTC m=+1193.539071685" watchObservedRunningTime="2026-01-23 16:36:32.394895768 +0000 UTC m=+1193.542137759" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.436275 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-xgcbw" podStartSLOduration=4.436248521 podStartE2EDuration="4.436248521s" podCreationTimestamp="2026-01-23 16:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:32.413299808 +0000 UTC m=+1193.560541799" watchObservedRunningTime="2026-01-23 16:36:32.436248521 +0000 UTC m=+1193.583490512" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.846322 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v2fl4" podStartSLOduration=3.085071233 podStartE2EDuration="12.846292651s" podCreationTimestamp="2026-01-23 16:36:20 +0000 UTC" firstStartedPulling="2026-01-23 16:36:21.713932502 +0000 UTC m=+1182.861174524" lastFinishedPulling="2026-01-23 16:36:31.475153951 +0000 UTC m=+1192.622395942" observedRunningTime="2026-01-23 16:36:32.478317274 +0000 UTC m=+1193.625559265" watchObservedRunningTime="2026-01-23 16:36:32.846292651 +0000 UTC m=+1193.993534642" Jan 23 16:36:32 crc kubenswrapper[4718]: I0123 16:36:32.913336 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-69ccdfb68b-l4gxm" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" containerName="console" containerID="cri-o://99d2c014b42402883f9ab5f9734accca712e7d99a3945a0a1a02979c81eead3e" gracePeriod=15 Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.064035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") 
" pod="openstack/swift-storage-0" Jan 23 16:36:33 crc kubenswrapper[4718]: E0123 16:36:33.064481 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 16:36:33 crc kubenswrapper[4718]: E0123 16:36:33.064538 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 16:36:33 crc kubenswrapper[4718]: E0123 16:36:33.064739 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift podName:3383bbd9-d755-435c-9d57-c66c5cadaf09 nodeName:}" failed. No retries permitted until 2026-01-23 16:36:49.064709265 +0000 UTC m=+1210.211951296 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift") pod "swift-storage-0" (UID: "3383bbd9-d755-435c-9d57-c66c5cadaf09") : configmap "swift-ring-files" not found Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.163121 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g2mvr"] Jan 23 16:36:33 crc kubenswrapper[4718]: E0123 16:36:33.172728 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157ab1a3-a5c6-48e7-8b8a-d53e8607e191" containerName="mariadb-account-create-update" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.173860 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="157ab1a3-a5c6-48e7-8b8a-d53e8607e191" containerName="mariadb-account-create-update" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.174483 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="157ab1a3-a5c6-48e7-8b8a-d53e8607e191" containerName="mariadb-account-create-update" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.177417 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.194882 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g2mvr"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.268526 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttpk9\" (UniqueName: \"kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.268730 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.273063 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4677-account-create-update-cb4s5"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.275066 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.279167 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.299297 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4677-account-create-update-cb4s5"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.312269 4718 generic.go:334] "Generic (PLEG): container finished" podID="41974084-91dc-4daa-ba00-561e0d52e4c2" containerID="d6d42073e0ac2b8268bfc287145f498db5e903468e73417f5d2b289d90ce726e" exitCode=0 Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.312407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xgcbw" event={"ID":"41974084-91dc-4daa-ba00-561e0d52e4c2","Type":"ContainerDied","Data":"d6d42073e0ac2b8268bfc287145f498db5e903468e73417f5d2b289d90ce726e"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.321185 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerStarted","Data":"7e7a1fe473e3c13076474935b70cb3e949509a253a0683c2e8aefea8d06a2fba"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.322447 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.324883 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerStarted","Data":"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.325417 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.327405 
4718 generic.go:334] "Generic (PLEG): container finished" podID="597decb2-86a1-400b-9635-cf0d9a10e643" containerID="952b0ed2a6d63c7f0b6bb18ceab5cb740988dbaf86ab16f90f2fd3e720b5c503" exitCode=0 Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.327457 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c18-account-create-update-q8bm5" event={"ID":"597decb2-86a1-400b-9635-cf0d9a10e643","Type":"ContainerDied","Data":"952b0ed2a6d63c7f0b6bb18ceab5cb740988dbaf86ab16f90f2fd3e720b5c503"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.335674 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69ccdfb68b-l4gxm_ac65643e-309d-4ea6-a522-ab62f944c544/console/0.log" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.335702 4718 generic.go:334] "Generic (PLEG): container finished" podID="ac65643e-309d-4ea6-a522-ab62f944c544" containerID="99d2c014b42402883f9ab5f9734accca712e7d99a3945a0a1a02979c81eead3e" exitCode=2 Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.335753 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69ccdfb68b-l4gxm" event={"ID":"ac65643e-309d-4ea6-a522-ab62f944c544","Type":"ContainerDied","Data":"99d2c014b42402883f9ab5f9734accca712e7d99a3945a0a1a02979c81eead3e"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.347267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerStarted","Data":"6c8509f2874fd88ca3b942fc341f4dfaf9f5147a532581171abba5681a692eb1"} Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.347779 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.395407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.395509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.399648 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.399791 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttpk9\" (UniqueName: \"kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.400142 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcpb\" (UniqueName: \"kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.445804 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-1" podStartSLOduration=38.634237131 podStartE2EDuration="1m4.445766927s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:31.515271798 +0000 UTC m=+1132.662513789" lastFinishedPulling="2026-01-23 16:35:57.326801594 +0000 UTC m=+1158.474043585" observedRunningTime="2026-01-23 16:36:33.369809384 +0000 UTC m=+1194.517051375" watchObservedRunningTime="2026-01-23 16:36:33.445766927 +0000 UTC m=+1194.593008928" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.455085 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttpk9\" (UniqueName: \"kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9\") pod \"keystone-db-create-g2mvr\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.502926 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.030375144 podStartE2EDuration="1m4.50289771s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:31.842610841 +0000 UTC m=+1132.989852832" lastFinishedPulling="2026-01-23 16:35:57.315133407 +0000 UTC m=+1158.462375398" observedRunningTime="2026-01-23 16:36:33.484767657 +0000 UTC m=+1194.632009658" watchObservedRunningTime="2026-01-23 16:36:33.50289771 +0000 UTC m=+1194.650139701" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.505817 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcpb\" (UniqueName: \"kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.505951 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.514083 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.516926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.541718 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-psw8d"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.543991 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.545287 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcpb\" (UniqueName: \"kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb\") pod \"keystone-4677-account-create-update-cb4s5\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.564543 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-psw8d"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.614184 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.942022443 podStartE2EDuration="1m4.614160942s" podCreationTimestamp="2026-01-23 16:35:29 +0000 UTC" firstStartedPulling="2026-01-23 16:35:31.655407305 +0000 UTC m=+1132.802649286" lastFinishedPulling="2026-01-23 16:35:57.327545794 +0000 UTC m=+1158.474787785" observedRunningTime="2026-01-23 16:36:33.588145295 +0000 UTC m=+1194.735387296" watchObservedRunningTime="2026-01-23 16:36:33.614160942 +0000 UTC m=+1194.761402933" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.623325 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.641860 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f265-account-create-update-kwnf2"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.643734 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.651140 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.670358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f265-account-create-update-kwnf2"] Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.710316 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.710395 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjrl\" (UniqueName: \"kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.710435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.710470 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk8f\" (UniqueName: 
\"kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.802129 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69ccdfb68b-l4gxm_ac65643e-309d-4ea6-a522-ab62f944c544/console/0.log" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.802212 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.812805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.812892 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrjrl\" (UniqueName: \"kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.812938 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.812971 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bqk8f\" (UniqueName: \"kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.814781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.826194 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.889705 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk8f\" (UniqueName: \"kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f\") pod \"placement-db-create-psw8d\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " pod="openstack/placement-db-create-psw8d" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.900671 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrjrl\" (UniqueName: \"kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl\") pod \"placement-f265-account-create-update-kwnf2\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.913606 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.913814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.913875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.913921 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.913982 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.914031 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2stz\" (UniqueName: \"kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz\") pod 
\"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.914067 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert\") pod \"ac65643e-309d-4ea6-a522-ab62f944c544\" (UID: \"ac65643e-309d-4ea6-a522-ab62f944c544\") " Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.915122 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.915613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.916351 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.916561 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config" (OuterVolumeSpecName: "console-config") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.925566 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.925863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz" (OuterVolumeSpecName: "kube-api-access-s2stz") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "kube-api-access-s2stz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:33 crc kubenswrapper[4718]: I0123 16:36:33.928833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac65643e-309d-4ea6-a522-ab62f944c544" (UID: "ac65643e-309d-4ea6-a522-ab62f944c544"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.016938 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.016975 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.016985 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.016995 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.017007 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2stz\" (UniqueName: \"kubernetes.io/projected/ac65643e-309d-4ea6-a522-ab62f944c544-kube-api-access-s2stz\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.017019 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac65643e-309d-4ea6-a522-ab62f944c544-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.017027 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac65643e-309d-4ea6-a522-ab62f944c544-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:34 crc 
kubenswrapper[4718]: I0123 16:36:34.035409 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-psw8d" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.041988 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.191555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g2mvr"] Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.340803 4718 patch_prober.go:28] interesting pod/console-69ccdfb68b-l4gxm container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.89:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.341248 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-69ccdfb68b-l4gxm" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" containerName="console" probeResult="failure" output="Get \"https://10.217.0.89:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.386870 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69ccdfb68b-l4gxm_ac65643e-309d-4ea6-a522-ab62f944c544/console/0.log" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.387052 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69ccdfb68b-l4gxm" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.390973 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69ccdfb68b-l4gxm" event={"ID":"ac65643e-309d-4ea6-a522-ab62f944c544","Type":"ContainerDied","Data":"6eac19d948e947a00083ea470649bd4e822672c7994735ecf464e43cb7319aea"} Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.391056 4718 scope.go:117] "RemoveContainer" containerID="99d2c014b42402883f9ab5f9734accca712e7d99a3945a0a1a02979c81eead3e" Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.401973 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g2mvr" event={"ID":"fe962884-b75c-4b1d-963e-6e6aeec3b1a5","Type":"ContainerStarted","Data":"e7c6d68a7554563ee5e260f7eb77057cf0126dd786cac91cecb1e2537ae449b2"} Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.404898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4677-account-create-update-cb4s5"] Jan 23 16:36:34 crc kubenswrapper[4718]: W0123 16:36:34.405136 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3 WatchSource:0}: Error finding container 403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3: Status 404 returned error can't find the container with id 403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3 Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.442108 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.466546 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-69ccdfb68b-l4gxm"] Jan 23 16:36:34 crc kubenswrapper[4718]: I0123 16:36:34.962300 
4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-psw8d"] Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.020472 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f265-account-create-update-kwnf2"] Jan 23 16:36:35 crc kubenswrapper[4718]: W0123 16:36:35.044752 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70376de_08be_4cd5_a9b1_92366794a8ee.slice/crio-9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8 WatchSource:0}: Error finding container 9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8: Status 404 returned error can't find the container with id 9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8 Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.159456 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" path="/var/lib/kubelet/pods/ac65643e-309d-4ea6-a522-ab62f944c544/volumes" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.196938 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.201598 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.378544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts\") pod \"41974084-91dc-4daa-ba00-561e0d52e4c2\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.379245 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkh9j\" (UniqueName: \"kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j\") pod \"597decb2-86a1-400b-9635-cf0d9a10e643\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.379340 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsxbr\" (UniqueName: \"kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr\") pod \"41974084-91dc-4daa-ba00-561e0d52e4c2\" (UID: \"41974084-91dc-4daa-ba00-561e0d52e4c2\") " Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.379652 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts\") pod \"597decb2-86a1-400b-9635-cf0d9a10e643\" (UID: \"597decb2-86a1-400b-9635-cf0d9a10e643\") " Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.379954 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41974084-91dc-4daa-ba00-561e0d52e4c2" (UID: "41974084-91dc-4daa-ba00-561e0d52e4c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.381218 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41974084-91dc-4daa-ba00-561e0d52e4c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.383019 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "597decb2-86a1-400b-9635-cf0d9a10e643" (UID: "597decb2-86a1-400b-9635-cf0d9a10e643"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.397150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr" (OuterVolumeSpecName: "kube-api-access-vsxbr") pod "41974084-91dc-4daa-ba00-561e0d52e4c2" (UID: "41974084-91dc-4daa-ba00-561e0d52e4c2"). InnerVolumeSpecName "kube-api-access-vsxbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.398202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j" (OuterVolumeSpecName: "kube-api-access-nkh9j") pod "597decb2-86a1-400b-9635-cf0d9a10e643" (UID: "597decb2-86a1-400b-9635-cf0d9a10e643"). InnerVolumeSpecName "kube-api-access-nkh9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.418209 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2c18-account-create-update-q8bm5" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.418574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c18-account-create-update-q8bm5" event={"ID":"597decb2-86a1-400b-9635-cf0d9a10e643","Type":"ContainerDied","Data":"68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.418674 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68025e1e27c5f6966017ccf7031ae5a8aa3beb6438ca71061d6c414f5a9922c3" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.431170 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xgcbw" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.435034 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xgcbw" event={"ID":"41974084-91dc-4daa-ba00-561e0d52e4c2","Type":"ContainerDied","Data":"04fcb86502e44be17d34a7ec8b36beb8c6b2688ad3613d34354c202b9f9b60e1"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.435240 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fcb86502e44be17d34a7ec8b36beb8c6b2688ad3613d34354c202b9f9b60e1" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.435838 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c9rfg" podUID="d2e5ad0b-04cf-49a9-badc-9e3184385c5b" containerName="ovn-controller" probeResult="failure" output=< Jan 23 16:36:35 crc kubenswrapper[4718]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 16:36:35 crc kubenswrapper[4718]: > Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.437224 4718 generic.go:334] "Generic (PLEG): container finished" podID="fe962884-b75c-4b1d-963e-6e6aeec3b1a5" 
containerID="82afed912c063964ca1ded2ab35e314bf4063d1993a265971a9440df06fd26db" exitCode=0 Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.437719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g2mvr" event={"ID":"fe962884-b75c-4b1d-963e-6e6aeec3b1a5","Type":"ContainerDied","Data":"82afed912c063964ca1ded2ab35e314bf4063d1993a265971a9440df06fd26db"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.447113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-psw8d" event={"ID":"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1","Type":"ContainerStarted","Data":"0de3444983dca25de35a01586dd8a71457b705626820255bb47848314ab05a8e"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.447163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-psw8d" event={"ID":"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1","Type":"ContainerStarted","Data":"40990be6484db1b73f02854c307a7e34a38ef4c1b247cffcfd86b38dbe5f5f2b"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.462763 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f265-account-create-update-kwnf2" event={"ID":"c70376de-08be-4cd5-a9b1-92366794a8ee","Type":"ContainerStarted","Data":"9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.469810 4718 generic.go:334] "Generic (PLEG): container finished" podID="119f9654-89f5-40ae-b93a-ddde420e1a51" containerID="41e4d9e2a3c80de4fbbdc303b7cb60f4b62f9146f55c8af2bcc1df97d0838627" exitCode=0 Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.470109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4677-account-create-update-cb4s5" event={"ID":"119f9654-89f5-40ae-b93a-ddde420e1a51","Type":"ContainerDied","Data":"41e4d9e2a3c80de4fbbdc303b7cb60f4b62f9146f55c8af2bcc1df97d0838627"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.470255 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4677-account-create-update-cb4s5" event={"ID":"119f9654-89f5-40ae-b93a-ddde420e1a51","Type":"ContainerStarted","Data":"403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3"} Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.487085 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597decb2-86a1-400b-9635-cf0d9a10e643-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.487131 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkh9j\" (UniqueName: \"kubernetes.io/projected/597decb2-86a1-400b-9635-cf0d9a10e643-kube-api-access-nkh9j\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.487154 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsxbr\" (UniqueName: \"kubernetes.io/projected/41974084-91dc-4daa-ba00-561e0d52e4c2-kube-api-access-vsxbr\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.503756 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-psw8d" podStartSLOduration=2.503726907 podStartE2EDuration="2.503726907s" podCreationTimestamp="2026-01-23 16:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:35.480823035 +0000 UTC m=+1196.628065036" watchObservedRunningTime="2026-01-23 16:36:35.503726907 +0000 UTC m=+1196.650968908" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.522649 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f265-account-create-update-kwnf2" podStartSLOduration=2.52259989 podStartE2EDuration="2.52259989s" podCreationTimestamp="2026-01-23 16:36:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:35.510933642 +0000 UTC m=+1196.658175633" watchObservedRunningTime="2026-01-23 16:36:35.52259989 +0000 UTC m=+1196.669841881" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.658213 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f9t8l"] Jan 23 16:36:35 crc kubenswrapper[4718]: E0123 16:36:35.658834 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" containerName="console" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.658855 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" containerName="console" Jan 23 16:36:35 crc kubenswrapper[4718]: E0123 16:36:35.658900 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597decb2-86a1-400b-9635-cf0d9a10e643" containerName="mariadb-account-create-update" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.658909 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="597decb2-86a1-400b-9635-cf0d9a10e643" containerName="mariadb-account-create-update" Jan 23 16:36:35 crc kubenswrapper[4718]: E0123 16:36:35.658921 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41974084-91dc-4daa-ba00-561e0d52e4c2" containerName="mariadb-database-create" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.658930 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="41974084-91dc-4daa-ba00-561e0d52e4c2" containerName="mariadb-database-create" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.659127 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="41974084-91dc-4daa-ba00-561e0d52e4c2" containerName="mariadb-database-create" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.659158 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="597decb2-86a1-400b-9635-cf0d9a10e643" containerName="mariadb-account-create-update" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.659174 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac65643e-309d-4ea6-a522-ab62f944c544" containerName="console" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.660062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.684799 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f9t8l"] Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.759894 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-szh5r"] Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.775233 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-szh5r"] Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.794896 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.794973 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxqr\" (UniqueName: \"kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.899468 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.899536 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.899608 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rxqr\" (UniqueName: \"kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.929290 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rxqr\" (UniqueName: \"kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr\") pod \"mysqld-exporter-openstack-db-create-f9t8l\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:35 crc kubenswrapper[4718]: I0123 16:36:35.985047 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.066009 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b01e-account-create-update-mrj22"] Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.068030 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.073642 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.081904 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b01e-account-create-update-mrj22"] Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.208534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkn4q\" (UniqueName: \"kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.208689 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.312058 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkn4q\" (UniqueName: 
\"kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.312479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.313353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.347674 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkn4q\" (UniqueName: \"kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q\") pod \"mysqld-exporter-b01e-account-create-update-mrj22\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.483419 4718 generic.go:334] "Generic (PLEG): container finished" podID="6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" containerID="0de3444983dca25de35a01586dd8a71457b705626820255bb47848314ab05a8e" exitCode=0 Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.483496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-psw8d" 
event={"ID":"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1","Type":"ContainerDied","Data":"0de3444983dca25de35a01586dd8a71457b705626820255bb47848314ab05a8e"} Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.485649 4718 generic.go:334] "Generic (PLEG): container finished" podID="c70376de-08be-4cd5-a9b1-92366794a8ee" containerID="ca0d56176215fb25f11e21766e61d34db17935168f4616ab4bcdb6811bd158ce" exitCode=0 Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.485865 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f265-account-create-update-kwnf2" event={"ID":"c70376de-08be-4cd5-a9b1-92366794a8ee","Type":"ContainerDied","Data":"ca0d56176215fb25f11e21766e61d34db17935168f4616ab4bcdb6811bd158ce"} Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.496928 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:36 crc kubenswrapper[4718]: I0123 16:36:36.625538 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f9t8l"] Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.113193 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z7sjl"] Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.115990 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.118390 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7sjl"] Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.118824 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.159139 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157ab1a3-a5c6-48e7-8b8a-d53e8607e191" path="/var/lib/kubelet/pods/157ab1a3-a5c6-48e7-8b8a-d53e8607e191/volumes" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.223029 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b01e-account-create-update-mrj22"] Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.235564 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts\") pod \"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.235766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggvc\" (UniqueName: \"kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc\") pod \"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.338194 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts\") pod 
\"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.338404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggvc\" (UniqueName: \"kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc\") pod \"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.339593 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts\") pod \"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.362509 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggvc\" (UniqueName: \"kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc\") pod \"root-account-create-update-z7sjl\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.455788 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.500046 4718 generic.go:334] "Generic (PLEG): container finished" podID="3a728186-fd9a-4f7f-9dea-4d781141e899" containerID="3a9f3ecac3ac2af201ccdb1707e3672be5d151fade7a6dc5d2ea3f103e8f450c" exitCode=0 Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.500199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" event={"ID":"3a728186-fd9a-4f7f-9dea-4d781141e899","Type":"ContainerDied","Data":"3a9f3ecac3ac2af201ccdb1707e3672be5d151fade7a6dc5d2ea3f103e8f450c"} Jan 23 16:36:37 crc kubenswrapper[4718]: I0123 16:36:37.500255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" event={"ID":"3a728186-fd9a-4f7f-9dea-4d781141e899","Type":"ContainerStarted","Data":"03e6551eced4515da8b2676de9019703c9f3fae41d659c15e5f4d7d79ce86099"} Jan 23 16:36:38 crc kubenswrapper[4718]: W0123 16:36:38.317104 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3d7193d_e220_4a8a_bf50_4de2784c88ec.slice/crio-339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306 WatchSource:0}: Error finding container 339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306: Status 404 returned error can't find the container with id 339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306 Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.545806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" event={"ID":"b3d7193d-e220-4a8a-bf50-4de2784c88ec","Type":"ContainerStarted","Data":"339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306"} Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.552114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-f265-account-create-update-kwnf2" event={"ID":"c70376de-08be-4cd5-a9b1-92366794a8ee","Type":"ContainerDied","Data":"9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8"} Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.552165 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac8c477557242ec7579601b2e7f29a9a5460abbb2466833876449a9c1dc22e8" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.560767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4677-account-create-update-cb4s5" event={"ID":"119f9654-89f5-40ae-b93a-ddde420e1a51","Type":"ContainerDied","Data":"403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3"} Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.560802 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.568969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g2mvr" event={"ID":"fe962884-b75c-4b1d-963e-6e6aeec3b1a5","Type":"ContainerDied","Data":"e7c6d68a7554563ee5e260f7eb77057cf0126dd786cac91cecb1e2537ae449b2"} Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.569027 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c6d68a7554563ee5e260f7eb77057cf0126dd786cac91cecb1e2537ae449b2" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.571614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-psw8d" event={"ID":"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1","Type":"ContainerDied","Data":"40990be6484db1b73f02854c307a7e34a38ef4c1b247cffcfd86b38dbe5f5f2b"} Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.571674 4718 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="40990be6484db1b73f02854c307a7e34a38ef4c1b247cffcfd86b38dbe5f5f2b" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.593810 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-psw8d" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.627010 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.647750 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.664458 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.790075 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts\") pod \"c70376de-08be-4cd5-a9b1-92366794a8ee\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.790735 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrjrl\" (UniqueName: \"kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl\") pod \"c70376de-08be-4cd5-a9b1-92366794a8ee\" (UID: \"c70376de-08be-4cd5-a9b1-92366794a8ee\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.790945 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttpk9\" (UniqueName: \"kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9\") pod \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 
16:36:38.790981 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqcpb\" (UniqueName: \"kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb\") pod \"119f9654-89f5-40ae-b93a-ddde420e1a51\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.791033 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts\") pod \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\" (UID: \"fe962884-b75c-4b1d-963e-6e6aeec3b1a5\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.791054 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts\") pod \"119f9654-89f5-40ae-b93a-ddde420e1a51\" (UID: \"119f9654-89f5-40ae-b93a-ddde420e1a51\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.791071 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqk8f\" (UniqueName: \"kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f\") pod \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.791100 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts\") pod \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\" (UID: \"6e538c6e-53f8-46c2-a8c0-8cd2562b70b1\") " Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.791351 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "c70376de-08be-4cd5-a9b1-92366794a8ee" (UID: "c70376de-08be-4cd5-a9b1-92366794a8ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.792013 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" (UID: "6e538c6e-53f8-46c2-a8c0-8cd2562b70b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.792498 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c70376de-08be-4cd5-a9b1-92366794a8ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.792525 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.794007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "119f9654-89f5-40ae-b93a-ddde420e1a51" (UID: "119f9654-89f5-40ae-b93a-ddde420e1a51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.794067 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe962884-b75c-4b1d-963e-6e6aeec3b1a5" (UID: "fe962884-b75c-4b1d-963e-6e6aeec3b1a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.803837 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl" (OuterVolumeSpecName: "kube-api-access-zrjrl") pod "c70376de-08be-4cd5-a9b1-92366794a8ee" (UID: "c70376de-08be-4cd5-a9b1-92366794a8ee"). InnerVolumeSpecName "kube-api-access-zrjrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.804028 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9" (OuterVolumeSpecName: "kube-api-access-ttpk9") pod "fe962884-b75c-4b1d-963e-6e6aeec3b1a5" (UID: "fe962884-b75c-4b1d-963e-6e6aeec3b1a5"). InnerVolumeSpecName "kube-api-access-ttpk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.804319 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb" (OuterVolumeSpecName: "kube-api-access-wqcpb") pod "119f9654-89f5-40ae-b93a-ddde420e1a51" (UID: "119f9654-89f5-40ae-b93a-ddde420e1a51"). InnerVolumeSpecName "kube-api-access-wqcpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.804916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f" (OuterVolumeSpecName: "kube-api-access-bqk8f") pod "6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" (UID: "6e538c6e-53f8-46c2-a8c0-8cd2562b70b1"). InnerVolumeSpecName "kube-api-access-bqk8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893759 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttpk9\" (UniqueName: \"kubernetes.io/projected/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-kube-api-access-ttpk9\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893786 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqcpb\" (UniqueName: \"kubernetes.io/projected/119f9654-89f5-40ae-b93a-ddde420e1a51-kube-api-access-wqcpb\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893795 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe962884-b75c-4b1d-963e-6e6aeec3b1a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893803 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqk8f\" (UniqueName: \"kubernetes.io/projected/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1-kube-api-access-bqk8f\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893815 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119f9654-89f5-40ae-b93a-ddde420e1a51-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.893823 4718 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrjrl\" (UniqueName: \"kubernetes.io/projected/c70376de-08be-4cd5-a9b1-92366794a8ee-kube-api-access-zrjrl\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:38 crc kubenswrapper[4718]: I0123 16:36:38.930547 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7sjl"] Jan 23 16:36:38 crc kubenswrapper[4718]: W0123 16:36:38.931436 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod498a729a_9294_46f5_a3c8_151bafe1f33c.slice/crio-72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e WatchSource:0}: Error finding container 72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e: Status 404 returned error can't find the container with id 72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.112076 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.241059 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cl7vm"] Jan 23 16:36:39 crc kubenswrapper[4718]: E0123 16:36:39.242046 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe962884-b75c-4b1d-963e-6e6aeec3b1a5" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242071 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe962884-b75c-4b1d-963e-6e6aeec3b1a5" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: E0123 16:36:39.242093 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119f9654-89f5-40ae-b93a-ddde420e1a51" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242100 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="119f9654-89f5-40ae-b93a-ddde420e1a51" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: E0123 16:36:39.242117 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70376de-08be-4cd5-a9b1-92366794a8ee" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242123 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70376de-08be-4cd5-a9b1-92366794a8ee" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: E0123 16:36:39.242145 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242153 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: E0123 16:36:39.242165 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3a728186-fd9a-4f7f-9dea-4d781141e899" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242172 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a728186-fd9a-4f7f-9dea-4d781141e899" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242374 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70376de-08be-4cd5-a9b1-92366794a8ee" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242390 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a728186-fd9a-4f7f-9dea-4d781141e899" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242396 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242409 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="119f9654-89f5-40ae-b93a-ddde420e1a51" containerName="mariadb-account-create-update" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.242423 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe962884-b75c-4b1d-963e-6e6aeec3b1a5" containerName="mariadb-database-create" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.243318 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.248459 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zxlkc" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.249431 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.271203 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl7vm"] Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.300512 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts\") pod \"3a728186-fd9a-4f7f-9dea-4d781141e899\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.300562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rxqr\" (UniqueName: \"kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr\") pod \"3a728186-fd9a-4f7f-9dea-4d781141e899\" (UID: \"3a728186-fd9a-4f7f-9dea-4d781141e899\") " Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.301432 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a728186-fd9a-4f7f-9dea-4d781141e899" (UID: "3a728186-fd9a-4f7f-9dea-4d781141e899"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.314898 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr" (OuterVolumeSpecName: "kube-api-access-7rxqr") pod "3a728186-fd9a-4f7f-9dea-4d781141e899" (UID: "3a728186-fd9a-4f7f-9dea-4d781141e899"). InnerVolumeSpecName "kube-api-access-7rxqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.403058 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.403223 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pltf\" (UniqueName: \"kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.403430 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.403699 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data\") pod 
\"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.404678 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a728186-fd9a-4f7f-9dea-4d781141e899-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.404754 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rxqr\" (UniqueName: \"kubernetes.io/projected/3a728186-fd9a-4f7f-9dea-4d781141e899-kube-api-access-7rxqr\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.507727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.508343 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.508961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.508994 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pltf\" (UniqueName: 
\"kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.512268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.514503 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.516922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.529724 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pltf\" (UniqueName: \"kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf\") pod \"glance-db-sync-cl7vm\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.586247 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.586246 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f9t8l" event={"ID":"3a728186-fd9a-4f7f-9dea-4d781141e899","Type":"ContainerDied","Data":"03e6551eced4515da8b2676de9019703c9f3fae41d659c15e5f4d7d79ce86099"} Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.586382 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e6551eced4515da8b2676de9019703c9f3fae41d659c15e5f4d7d79ce86099" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.587707 4718 generic.go:334] "Generic (PLEG): container finished" podID="498a729a-9294-46f5-a3c8-151bafe1f33c" containerID="792984c57cf97fb79554ae2499a8cb577255140d675b09e9e971c940c797f7c9" exitCode=0 Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.587768 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7sjl" event={"ID":"498a729a-9294-46f5-a3c8-151bafe1f33c","Type":"ContainerDied","Data":"792984c57cf97fb79554ae2499a8cb577255140d675b09e9e971c940c797f7c9"} Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.587785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7sjl" event={"ID":"498a729a-9294-46f5-a3c8-151bafe1f33c","Type":"ContainerStarted","Data":"72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e"} Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.590974 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerStarted","Data":"27cf948be2d3073f63b16d346d2fa5cfb356943b5526c8b9aa8a69a7ece6b11b"} Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.592869 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3d7193d-e220-4a8a-bf50-4de2784c88ec" 
containerID="3245b7740a83e8caa35b2fe68e40e5b3aa1ffdcfe5df7f82ff75fe116bd3bba4" exitCode=0 Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.592976 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g2mvr" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.593041 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4677-account-create-update-cb4s5" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.593117 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-psw8d" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.593213 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-kwnf2" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.593396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" event={"ID":"b3d7193d-e220-4a8a-bf50-4de2784c88ec","Type":"ContainerDied","Data":"3245b7740a83e8caa35b2fe68e40e5b3aa1ffdcfe5df7f82ff75fe116bd3bba4"} Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.657043 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.389721098 podStartE2EDuration="1m4.657022471s" podCreationTimestamp="2026-01-23 16:35:35 +0000 UTC" firstStartedPulling="2026-01-23 16:35:58.140086529 +0000 UTC m=+1159.287328520" lastFinishedPulling="2026-01-23 16:36:38.407387882 +0000 UTC m=+1199.554629893" observedRunningTime="2026-01-23 16:36:39.649760615 +0000 UTC m=+1200.797002616" watchObservedRunningTime="2026-01-23 16:36:39.657022471 +0000 UTC m=+1200.804264452" Jan 23 16:36:39 crc kubenswrapper[4718]: I0123 16:36:39.726170 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl7vm" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.402874 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c9rfg" podUID="d2e5ad0b-04cf-49a9-badc-9e3184385c5b" containerName="ovn-controller" probeResult="failure" output=< Jan 23 16:36:41 crc kubenswrapper[4718]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 16:36:41 crc kubenswrapper[4718]: > Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.422954 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.424945 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f2xs2" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.505081 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl7vm"] Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.605566 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl7vm" event={"ID":"fdefc9e4-c27d-4b53-a75a-5a74124d31f2","Type":"ContainerStarted","Data":"e82afc8df14bff7a3dd8ba613bd021e04f4614baba5249cc86b63639a0f58611"} Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.723740 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9rfg-config-grvhl"] Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.725235 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.727655 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.750897 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg-config-grvhl"] Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.855960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.856016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qbf\" (UniqueName: \"kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.856205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.856720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: 
\"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.856763 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.857165 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" 
(UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959613 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.959706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qbf\" (UniqueName: \"kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.960053 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.960081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: 
\"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.960082 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.960857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.961959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:40.983236 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qbf\" (UniqueName: \"kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf\") pod \"ovn-controller-c9rfg-config-grvhl\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: I0123 16:36:41.046874 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:41 crc kubenswrapper[4718]: E0123 16:36:41.309173 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.172189 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg-config-grvhl"] Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.175932 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.177875 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:42 crc kubenswrapper[4718]: W0123 16:36:42.181874 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3c510c_08f9_40af_9846_bb5e9964d30c.slice/crio-e2b03af8bb30c496b3ed9e206c2ac3a2b9957a9dc13d5fe972841ba48dea85c0 WatchSource:0}: Error finding container e2b03af8bb30c496b3ed9e206c2ac3a2b9957a9dc13d5fe972841ba48dea85c0: Status 404 returned error can't find the container with id e2b03af8bb30c496b3ed9e206c2ac3a2b9957a9dc13d5fe972841ba48dea85c0 Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.294847 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts\") pod \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " Jan 23 16:36:42 crc kubenswrapper[4718]: 
I0123 16:36:42.295050 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts\") pod \"498a729a-9294-46f5-a3c8-151bafe1f33c\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.295179 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggvc\" (UniqueName: \"kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc\") pod \"498a729a-9294-46f5-a3c8-151bafe1f33c\" (UID: \"498a729a-9294-46f5-a3c8-151bafe1f33c\") " Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.295280 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkn4q\" (UniqueName: \"kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q\") pod \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\" (UID: \"b3d7193d-e220-4a8a-bf50-4de2784c88ec\") " Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.295792 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3d7193d-e220-4a8a-bf50-4de2784c88ec" (UID: "b3d7193d-e220-4a8a-bf50-4de2784c88ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.295959 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "498a729a-9294-46f5-a3c8-151bafe1f33c" (UID: "498a729a-9294-46f5-a3c8-151bafe1f33c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.298981 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498a729a-9294-46f5-a3c8-151bafe1f33c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.299249 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d7193d-e220-4a8a-bf50-4de2784c88ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.305943 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q" (OuterVolumeSpecName: "kube-api-access-qkn4q") pod "b3d7193d-e220-4a8a-bf50-4de2784c88ec" (UID: "b3d7193d-e220-4a8a-bf50-4de2784c88ec"). InnerVolumeSpecName "kube-api-access-qkn4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.307129 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc" (OuterVolumeSpecName: "kube-api-access-cggvc") pod "498a729a-9294-46f5-a3c8-151bafe1f33c" (UID: "498a729a-9294-46f5-a3c8-151bafe1f33c"). InnerVolumeSpecName "kube-api-access-cggvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.401020 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggvc\" (UniqueName: \"kubernetes.io/projected/498a729a-9294-46f5-a3c8-151bafe1f33c-kube-api-access-cggvc\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.401058 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkn4q\" (UniqueName: \"kubernetes.io/projected/b3d7193d-e220-4a8a-bf50-4de2784c88ec-kube-api-access-qkn4q\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.433051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.629215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" event={"ID":"b3d7193d-e220-4a8a-bf50-4de2784c88ec","Type":"ContainerDied","Data":"339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306"} Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.629268 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="339c753bcca2c85c214e2375913b4746aeb6f252ee93c3a933b3aecd4543c306" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.629268 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b01e-account-create-update-mrj22" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.630725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-grvhl" event={"ID":"fd3c510c-08f9-40af-9846-bb5e9964d30c","Type":"ContainerStarted","Data":"4bd3f777b687a07dc4b7d698c10c408f844582051d4a9818d0f5cd09e7e6a3f5"} Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.630761 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-grvhl" event={"ID":"fd3c510c-08f9-40af-9846-bb5e9964d30c","Type":"ContainerStarted","Data":"e2b03af8bb30c496b3ed9e206c2ac3a2b9957a9dc13d5fe972841ba48dea85c0"} Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.632797 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7sjl" event={"ID":"498a729a-9294-46f5-a3c8-151bafe1f33c","Type":"ContainerDied","Data":"72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e"} Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.632813 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7sjl" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.632833 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72526446fbf243c2df1c86f804b6d8c0508c8f92ecbaac06748e1c41f606b20e" Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.645488 4718 generic.go:334] "Generic (PLEG): container finished" podID="9a040b54-6ee7-446b-83f1-b6b5c211ef43" containerID="c240afb98f0120232c162110fdd968a744587ed91ca2c710af414e13226272b6" exitCode=0 Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.645538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2fl4" event={"ID":"9a040b54-6ee7-446b-83f1-b6b5c211ef43","Type":"ContainerDied","Data":"c240afb98f0120232c162110fdd968a744587ed91ca2c710af414e13226272b6"} Jan 23 16:36:42 crc kubenswrapper[4718]: I0123 16:36:42.656986 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c9rfg-config-grvhl" podStartSLOduration=2.656967583 podStartE2EDuration="2.656967583s" podCreationTimestamp="2026-01-23 16:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:36:42.653015606 +0000 UTC m=+1203.800257617" watchObservedRunningTime="2026-01-23 16:36:42.656967583 +0000 UTC m=+1203.804209574" Jan 23 16:36:43 crc kubenswrapper[4718]: I0123 16:36:43.672236 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd3c510c-08f9-40af-9846-bb5e9964d30c" containerID="4bd3f777b687a07dc4b7d698c10c408f844582051d4a9818d0f5cd09e7e6a3f5" exitCode=0 Jan 23 16:36:43 crc kubenswrapper[4718]: I0123 16:36:43.672832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-grvhl" 
event={"ID":"fd3c510c-08f9-40af-9846-bb5e9964d30c","Type":"ContainerDied","Data":"4bd3f777b687a07dc4b7d698c10c408f844582051d4a9818d0f5cd09e7e6a3f5"} Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.178468 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.348420 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.348867 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.349073 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.349175 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.349241 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift\") pod 
\"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.349316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zkqp\" (UniqueName: \"kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.349433 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle\") pod \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\" (UID: \"9a040b54-6ee7-446b-83f1-b6b5c211ef43\") " Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.350683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.350875 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.371827 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.399004 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp" (OuterVolumeSpecName: "kube-api-access-4zkqp") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "kube-api-access-4zkqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.402271 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts" (OuterVolumeSpecName: "scripts") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.428530 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.441651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9a040b54-6ee7-446b-83f1-b6b5c211ef43" (UID: "9a040b54-6ee7-446b-83f1-b6b5c211ef43"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452570 4718 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452602 4718 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a040b54-6ee7-446b-83f1-b6b5c211ef43-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452613 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zkqp\" (UniqueName: \"kubernetes.io/projected/9a040b54-6ee7-446b-83f1-b6b5c211ef43-kube-api-access-4zkqp\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452625 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452711 4718 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452721 4718 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/9a040b54-6ee7-446b-83f1-b6b5c211ef43-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.452752 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a040b54-6ee7-446b-83f1-b6b5c211ef43-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.689654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2fl4" event={"ID":"9a040b54-6ee7-446b-83f1-b6b5c211ef43","Type":"ContainerDied","Data":"6628ff987ae67a93bf5901dcec336f5aec4dbab32a2324dc4ffb8ae4bf887a72"} Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.689750 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6628ff987ae67a93bf5901dcec336f5aec4dbab32a2324dc4ffb8ae4bf887a72" Jan 23 16:36:44 crc kubenswrapper[4718]: I0123 16:36:44.689706 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v2fl4" Jan 23 16:36:44 crc kubenswrapper[4718]: E0123 16:36:44.893816 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.112726 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.271976 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.272070 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qbf\" (UniqueName: \"kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.272129 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.272198 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.272233 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.272317 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn\") pod \"fd3c510c-08f9-40af-9846-bb5e9964d30c\" (UID: \"fd3c510c-08f9-40af-9846-bb5e9964d30c\") " Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.273676 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.274884 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.274981 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run" (OuterVolumeSpecName: "var-run") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.279060 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.279247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts" (OuterVolumeSpecName: "scripts") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.285605 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf" (OuterVolumeSpecName: "kube-api-access-77qbf") pod "fd3c510c-08f9-40af-9846-bb5e9964d30c" (UID: "fd3c510c-08f9-40af-9846-bb5e9964d30c"). InnerVolumeSpecName "kube-api-access-77qbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.345088 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c9rfg-config-grvhl"] Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.358087 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c9rfg-config-grvhl"] Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.370046 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c9rfg" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375039 4718 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375069 4718 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 
16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375079 4718 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375088 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qbf\" (UniqueName: \"kubernetes.io/projected/fd3c510c-08f9-40af-9846-bb5e9964d30c-kube-api-access-77qbf\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375097 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd3c510c-08f9-40af-9846-bb5e9964d30c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.375105 4718 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd3c510c-08f9-40af-9846-bb5e9964d30c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.418118 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9rfg-config-wbvfp"] Jan 23 16:36:45 crc kubenswrapper[4718]: E0123 16:36:45.418783 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3c510c-08f9-40af-9846-bb5e9964d30c" containerName="ovn-config" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.418805 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3c510c-08f9-40af-9846-bb5e9964d30c" containerName="ovn-config" Jan 23 16:36:45 crc kubenswrapper[4718]: E0123 16:36:45.418838 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a040b54-6ee7-446b-83f1-b6b5c211ef43" containerName="swift-ring-rebalance" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.418847 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a040b54-6ee7-446b-83f1-b6b5c211ef43" 
containerName="swift-ring-rebalance" Jan 23 16:36:45 crc kubenswrapper[4718]: E0123 16:36:45.418859 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498a729a-9294-46f5-a3c8-151bafe1f33c" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.418866 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="498a729a-9294-46f5-a3c8-151bafe1f33c" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: E0123 16:36:45.418884 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d7193d-e220-4a8a-bf50-4de2784c88ec" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.418892 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d7193d-e220-4a8a-bf50-4de2784c88ec" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.419113 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="498a729a-9294-46f5-a3c8-151bafe1f33c" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.419134 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d7193d-e220-4a8a-bf50-4de2784c88ec" containerName="mariadb-account-create-update" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.419152 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a040b54-6ee7-446b-83f1-b6b5c211ef43" containerName="swift-ring-rebalance" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.419163 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3c510c-08f9-40af-9846-bb5e9964d30c" containerName="ovn-config" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.420078 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.430975 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg-config-wbvfp"] Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.578971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.579039 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qsh\" (UniqueName: \"kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.579143 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.579352 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.579760 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.579921 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.682172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.683026 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.683214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 
16:36:45.683593 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qsh\" (UniqueName: \"kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.683544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.684080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.686179 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.687036 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.687135 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.687305 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.687400 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.702254 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b03af8bb30c496b3ed9e206c2ac3a2b9957a9dc13d5fe972841ba48dea85c0" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.702342 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-grvhl" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.713087 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qsh\" (UniqueName: \"kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh\") pod \"ovn-controller-c9rfg-config-wbvfp\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.747782 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.805343 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z7sjl"] Jan 23 16:36:45 crc kubenswrapper[4718]: I0123 16:36:45.818169 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z7sjl"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.315678 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-f79tn"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.318351 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.327728 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9rfg-config-wbvfp"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.339933 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-f79tn"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.409714 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.409883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wkj\" (UniqueName: \"kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.513976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.514299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6wkj\" (UniqueName: \"kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.514865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.528457 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1b7b-account-create-update-lnclr"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.530760 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.532608 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.540611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6wkj\" (UniqueName: \"kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj\") pod \"mysqld-exporter-openstack-cell1-db-create-f79tn\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.553284 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1b7b-account-create-update-lnclr"] Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.618348 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsl5l\" (UniqueName: \"kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l\") pod \"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.619959 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts\") pod \"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.693030 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.715991 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-wbvfp" event={"ID":"d8786710-6c66-4f87-9f8b-f0b1874c46e4","Type":"ContainerStarted","Data":"651a8638af070b8ed037d4a7b6add8c72c9aed2f8783e9750052685c3c84d455"} Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.723913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsl5l\" (UniqueName: \"kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l\") pod \"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.724107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts\") pod \"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.725558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts\") pod \"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.742567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsl5l\" (UniqueName: \"kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l\") pod 
\"mysqld-exporter-1b7b-account-create-update-lnclr\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:46 crc kubenswrapper[4718]: I0123 16:36:46.926723 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.128508 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d9hbt"] Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.129967 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.136344 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.139018 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.139155 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2mf\" (UniqueName: \"kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.156336 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498a729a-9294-46f5-a3c8-151bafe1f33c" 
path="/var/lib/kubelet/pods/498a729a-9294-46f5-a3c8-151bafe1f33c/volumes" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.157132 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3c510c-08f9-40af-9846-bb5e9964d30c" path="/var/lib/kubelet/pods/fd3c510c-08f9-40af-9846-bb5e9964d30c/volumes" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.166646 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9hbt"] Jan 23 16:36:47 crc kubenswrapper[4718]: W0123 16:36:47.220395 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc06a1f0_3166_4c08_bdef_e50e7ec1829f.slice/crio-2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc WatchSource:0}: Error finding container 2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc: Status 404 returned error can't find the container with id 2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.225241 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-f79tn"] Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.241928 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.242017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2mf\" (UniqueName: \"kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " 
pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.242710 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.263044 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2mf\" (UniqueName: \"kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf\") pod \"root-account-create-update-d9hbt\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.465692 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.509925 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1b7b-account-create-update-lnclr"] Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.753473 4718 generic.go:334] "Generic (PLEG): container finished" podID="d8786710-6c66-4f87-9f8b-f0b1874c46e4" containerID="ebca100cee04ae7b5b5f80bca0cf7e7ade8c2a8de2c13f40eed9cf4f01e2967b" exitCode=0 Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.753563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-wbvfp" event={"ID":"d8786710-6c66-4f87-9f8b-f0b1874c46e4","Type":"ContainerDied","Data":"ebca100cee04ae7b5b5f80bca0cf7e7ade8c2a8de2c13f40eed9cf4f01e2967b"} Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.765284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" 
event={"ID":"4a487cdb-2762-422f-bdb5-df30a8ce6f20","Type":"ContainerStarted","Data":"8193b7034baee647da399ead99b97a91786c3720a9d872dc39a7c190d0671631"} Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.768087 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" event={"ID":"bc06a1f0-3166-4c08-bdef-e50e7ec1829f","Type":"ContainerStarted","Data":"d365f3670a0bc24b8f47450a46faab08a73303d28afb7ed5cba5fc02417e9d60"} Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.768130 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" event={"ID":"bc06a1f0-3166-4c08-bdef-e50e7ec1829f","Type":"ContainerStarted","Data":"2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc"} Jan 23 16:36:47 crc kubenswrapper[4718]: I0123 16:36:47.978122 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9hbt"] Jan 23 16:36:48 crc kubenswrapper[4718]: E0123 16:36:48.245869 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:48 crc kubenswrapper[4718]: E0123 16:36:48.247079 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.808302 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc06a1f0-3166-4c08-bdef-e50e7ec1829f" 
containerID="d365f3670a0bc24b8f47450a46faab08a73303d28afb7ed5cba5fc02417e9d60" exitCode=0 Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.809486 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" event={"ID":"bc06a1f0-3166-4c08-bdef-e50e7ec1829f","Type":"ContainerDied","Data":"d365f3670a0bc24b8f47450a46faab08a73303d28afb7ed5cba5fc02417e9d60"} Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.812079 4718 generic.go:334] "Generic (PLEG): container finished" podID="4a487cdb-2762-422f-bdb5-df30a8ce6f20" containerID="de9217333a38058157ba4402a01353a7b9513cddfb908b6ef47fce5ae04997d2" exitCode=0 Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.812177 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" event={"ID":"4a487cdb-2762-422f-bdb5-df30a8ce6f20","Type":"ContainerDied","Data":"de9217333a38058157ba4402a01353a7b9513cddfb908b6ef47fce5ae04997d2"} Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.814202 4718 generic.go:334] "Generic (PLEG): container finished" podID="761ff9db-524f-41eb-9386-63e4d4338ba0" containerID="c5c885c944cb6731e48532193a4758e237470f3fee24eb33623c429f5c050009" exitCode=0 Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.814279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9hbt" event={"ID":"761ff9db-524f-41eb-9386-63e4d4338ba0","Type":"ContainerDied","Data":"c5c885c944cb6731e48532193a4758e237470f3fee24eb33623c429f5c050009"} Jan 23 16:36:48 crc kubenswrapper[4718]: I0123 16:36:48.814349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9hbt" event={"ID":"761ff9db-524f-41eb-9386-63e4d4338ba0","Type":"ContainerStarted","Data":"8cfe83106adaa1cfc12609722f9ae8edf288d89bed58c5f65ccaf68cccf54dcd"} Jan 23 16:36:49 crc kubenswrapper[4718]: I0123 16:36:49.095548 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:49 crc kubenswrapper[4718]: I0123 16:36:49.110623 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3383bbd9-d755-435c-9d57-c66c5cadaf09-etc-swift\") pod \"swift-storage-0\" (UID: \"3383bbd9-d755-435c-9d57-c66c5cadaf09\") " pod="openstack/swift-storage-0" Jan 23 16:36:49 crc kubenswrapper[4718]: I0123 16:36:49.247560 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 16:36:50 crc kubenswrapper[4718]: I0123 16:36:50.834951 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 16:36:50 crc kubenswrapper[4718]: I0123 16:36:50.907275 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 23 16:36:50 crc kubenswrapper[4718]: I0123 16:36:50.989971 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 23 16:36:51 crc kubenswrapper[4718]: I0123 16:36:51.112844 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:36:52 crc kubenswrapper[4718]: I0123 16:36:52.433399 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:52 crc kubenswrapper[4718]: I0123 16:36:52.437781 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:52 crc kubenswrapper[4718]: I0123 16:36:52.870620 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.933263 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-l4rtr"] Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.936335 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.946000 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l4rtr"] Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.976717 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9aa1-account-create-update-znt6h"] Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.978975 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:54 crc kubenswrapper[4718]: I0123 16:36:54.981450 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.001375 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9aa1-account-create-update-znt6h"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.038418 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k9b4f"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.040253 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.056081 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dt7\" (UniqueName: \"kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.056277 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nwb\" (UniqueName: \"kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.056436 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.056649 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.059285 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k9b4f"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.075271 4718 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-86cf-account-create-update-rzlzf"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.078161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.084588 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.099393 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-86cf-account-create-update-rzlzf"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.181712 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7twt\" (UniqueName: \"kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.181857 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dt7\" (UniqueName: \"kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.181982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94nwb\" (UniqueName: \"kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.182175 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.182435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.182489 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.182591 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5k2\" (UniqueName: \"kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.182660 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: 
I0123 16:36:55.184452 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.192173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.229857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94nwb\" (UniqueName: \"kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb\") pod \"barbican-db-create-l4rtr\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.231521 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5lntz"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.235511 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.237834 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6dt7\" (UniqueName: \"kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7\") pod \"barbican-9aa1-account-create-update-znt6h\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.255867 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5lntz"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.264971 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-b6d3-account-create-update-pxk6l"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.266144 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4rtr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.267357 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.271545 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.275699 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b6d3-account-create-update-pxk6l"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.285838 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5k2\" (UniqueName: \"kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.285889 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.286001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7twt\" (UniqueName: \"kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.286184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " 
pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.288959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.294822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.317225 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5k2\" (UniqueName: \"kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2\") pod \"cinder-86cf-account-create-update-rzlzf\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.326222 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7twt\" (UniqueName: \"kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt\") pod \"cinder-db-create-k9b4f\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.334965 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-s2sfc"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.336408 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.338888 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.339238 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4m7cb" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.339421 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.339801 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 16:36:55 crc kubenswrapper[4718]: E0123 16:36:55.343077 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.343364 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-s2sfc"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.340126 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.376744 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9b4f" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.387800 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2fq\" (UniqueName: \"kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.387946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.388010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbf2v\" (UniqueName: \"kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.388062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.408437 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.436409 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5lj86"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.438379 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.459615 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5lj86"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.489995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490090 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490166 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6s97\" (UniqueName: \"kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490198 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbf2v\" (UniqueName: 
\"kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490329 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490358 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2fq\" (UniqueName: \"kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.490377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.491196 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.491249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.519986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2fq\" (UniqueName: \"kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq\") pod \"heat-b6d3-account-create-update-pxk6l\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.527710 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbf2v\" (UniqueName: \"kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v\") pod \"heat-db-create-5lntz\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.537376 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b5d4-account-create-update-zvckr"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.540505 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.545998 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.565491 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5d4-account-create-update-zvckr"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.594168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.594225 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6s97\" (UniqueName: \"kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.594260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.594323 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67gp\" (UniqueName: \"kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " pod="openstack/neutron-db-create-5lj86" Jan 23 
16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.594344 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.601521 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.603595 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.612208 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6s97\" (UniqueName: \"kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97\") pod \"keystone-db-sync-s2sfc\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.619871 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.620181 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="prometheus" containerID="cri-o://84db3a5916a27ae74bb4bcf9237a619a6327726a16eb4d4ddd9aa2a66c198133" 
gracePeriod=600 Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.620320 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="thanos-sidecar" containerID="cri-o://27cf948be2d3073f63b16d346d2fa5cfb356943b5526c8b9aa8a69a7ece6b11b" gracePeriod=600 Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.620336 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="config-reloader" containerID="cri-o://d6493d1966a9e258cb3a251aad8359e9a968e642581d2453e5c83eb60e975ea1" gracePeriod=600 Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.695989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2ls\" (UniqueName: \"kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.696079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.696204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67gp\" (UniqueName: \"kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " 
pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.696363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.697113 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.712960 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5lntz" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.714002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67gp\" (UniqueName: \"kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp\") pod \"neutron-db-create-5lj86\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.728161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.747706 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.773033 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5lj86" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.798804 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2ls\" (UniqueName: \"kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.798869 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.799715 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.816907 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2ls\" (UniqueName: \"kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls\") pod \"neutron-b5d4-account-create-update-zvckr\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:55 crc kubenswrapper[4718]: I0123 16:36:55.919312 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:36:56 crc kubenswrapper[4718]: E0123 16:36:56.086838 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.920130 4718 generic.go:334] "Generic (PLEG): container finished" podID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerID="27cf948be2d3073f63b16d346d2fa5cfb356943b5526c8b9aa8a69a7ece6b11b" exitCode=0 Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.920992 4718 generic.go:334] "Generic (PLEG): container finished" podID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerID="d6493d1966a9e258cb3a251aad8359e9a968e642581d2453e5c83eb60e975ea1" exitCode=0 Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.921042 4718 generic.go:334] "Generic (PLEG): container finished" podID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerID="84db3a5916a27ae74bb4bcf9237a619a6327726a16eb4d4ddd9aa2a66c198133" exitCode=0 Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.920256 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerDied","Data":"27cf948be2d3073f63b16d346d2fa5cfb356943b5526c8b9aa8a69a7ece6b11b"} Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.921099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerDied","Data":"d6493d1966a9e258cb3a251aad8359e9a968e642581d2453e5c83eb60e975ea1"} Jan 23 16:36:56 crc kubenswrapper[4718]: I0123 16:36:56.921121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerDied","Data":"84db3a5916a27ae74bb4bcf9237a619a6327726a16eb4d4ddd9aa2a66c198133"} Jan 23 16:36:57 crc kubenswrapper[4718]: I0123 16:36:57.434031 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Jan 23 16:36:58 crc kubenswrapper[4718]: E0123 16:36:58.121086 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 23 16:36:58 crc kubenswrapper[4718]: E0123 16:36:58.122465 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pltf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-cl7vm_openstack(fdefc9e4-c27d-4b53-a75a-5a74124d31f2): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 23 16:36:58 crc kubenswrapper[4718]: E0123 16:36:58.123849 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-cl7vm" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.358974 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.367898 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.373241 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.408193 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461359 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7qsh\" (UniqueName: \"kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461410 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461666 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsl5l\" (UniqueName: \"kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l\") pod \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461724 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls2mf\" (UniqueName: \"kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf\") pod \"761ff9db-524f-41eb-9386-63e4d4338ba0\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461747 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts\") pod \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461801 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461830 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461887 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6wkj\" (UniqueName: \"kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj\") pod \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\" (UID: \"bc06a1f0-3166-4c08-bdef-e50e7ec1829f\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461881 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461911 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts\") pod \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\" (UID: \"4a487cdb-2762-422f-bdb5-df30a8ce6f20\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461930 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run\") pod \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\" (UID: \"d8786710-6c66-4f87-9f8b-f0b1874c46e4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.461963 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts\") pod \"761ff9db-524f-41eb-9386-63e4d4338ba0\" (UID: \"761ff9db-524f-41eb-9386-63e4d4338ba0\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.462418 4718 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.463381 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "761ff9db-524f-41eb-9386-63e4d4338ba0" (UID: "761ff9db-524f-41eb-9386-63e4d4338ba0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.463964 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.465732 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts" (OuterVolumeSpecName: "scripts") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.471242 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a487cdb-2762-422f-bdb5-df30a8ce6f20" (UID: "4a487cdb-2762-422f-bdb5-df30a8ce6f20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.471363 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run" (OuterVolumeSpecName: "var-run") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.471310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc06a1f0-3166-4c08-bdef-e50e7ec1829f" (UID: "bc06a1f0-3166-4c08-bdef-e50e7ec1829f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.471482 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.472339 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh" (OuterVolumeSpecName: "kube-api-access-b7qsh") pod "d8786710-6c66-4f87-9f8b-f0b1874c46e4" (UID: "d8786710-6c66-4f87-9f8b-f0b1874c46e4"). InnerVolumeSpecName "kube-api-access-b7qsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.473053 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf" (OuterVolumeSpecName: "kube-api-access-ls2mf") pod "761ff9db-524f-41eb-9386-63e4d4338ba0" (UID: "761ff9db-524f-41eb-9386-63e4d4338ba0"). InnerVolumeSpecName "kube-api-access-ls2mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.476917 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l" (OuterVolumeSpecName: "kube-api-access-fsl5l") pod "4a487cdb-2762-422f-bdb5-df30a8ce6f20" (UID: "4a487cdb-2762-422f-bdb5-df30a8ce6f20"). InnerVolumeSpecName "kube-api-access-fsl5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.481568 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj" (OuterVolumeSpecName: "kube-api-access-s6wkj") pod "bc06a1f0-3166-4c08-bdef-e50e7ec1829f" (UID: "bc06a1f0-3166-4c08-bdef-e50e7ec1829f"). InnerVolumeSpecName "kube-api-access-s6wkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567451 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6wkj\" (UniqueName: \"kubernetes.io/projected/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-kube-api-access-s6wkj\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567727 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a487cdb-2762-422f-bdb5-df30a8ce6f20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567791 4718 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567845 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/761ff9db-524f-41eb-9386-63e4d4338ba0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567906 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7qsh\" (UniqueName: \"kubernetes.io/projected/d8786710-6c66-4f87-9f8b-f0b1874c46e4-kube-api-access-b7qsh\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.567961 4718 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.568016 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsl5l\" (UniqueName: \"kubernetes.io/projected/4a487cdb-2762-422f-bdb5-df30a8ce6f20-kube-api-access-fsl5l\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.568068 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls2mf\" (UniqueName: \"kubernetes.io/projected/761ff9db-524f-41eb-9386-63e4d4338ba0-kube-api-access-ls2mf\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.568120 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc06a1f0-3166-4c08-bdef-e50e7ec1829f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.568169 4718 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8786710-6c66-4f87-9f8b-f0b1874c46e4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.568219 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8786710-6c66-4f87-9f8b-f0b1874c46e4-scripts\") on 
node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.666280 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.774649 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.774728 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qz9\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.774803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.774896 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.774924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: 
\"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.775156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.775357 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.775396 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.775428 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.775501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1\") pod \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\" (UID: \"148cf8fd-f7c0-4355-a54b-ff8052293bc4\") " Jan 23 16:36:58 crc 
kubenswrapper[4718]: I0123 16:36:58.778103 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.779255 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.785261 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out" (OuterVolumeSpecName: "config-out") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.787153 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.788263 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9" (OuterVolumeSpecName: "kube-api-access-77qz9") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "kube-api-access-77qz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.789003 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.789444 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.791620 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config" (OuterVolumeSpecName: "config") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.814359 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.852746 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config" (OuterVolumeSpecName: "web-config") pod "148cf8fd-f7c0-4355-a54b-ff8052293bc4" (UID: "148cf8fd-f7c0-4355-a54b-ff8052293bc4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.875423 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.875492 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877478 4718 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-thanos-prometheus-http-client-file\") on node 
\"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877502 4718 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877513 4718 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877525 4718 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877536 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877573 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qz9\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-kube-api-access-77qz9\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877583 4718 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/148cf8fd-f7c0-4355-a54b-ff8052293bc4-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877591 4718 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/148cf8fd-f7c0-4355-a54b-ff8052293bc4-prometheus-metric-storage-rulefiles-2\") on node \"crc\" 
DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877600 4718 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/148cf8fd-f7c0-4355-a54b-ff8052293bc4-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.877667 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") on node \"crc\" " Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.897554 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5lj86"] Jan 23 16:36:58 crc kubenswrapper[4718]: W0123 16:36:58.897850 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f60062d_a297_484e_a230_41a8c9f8e5e4.slice/crio-9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0 WatchSource:0}: Error finding container 9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0: Status 404 returned error can't find the container with id 9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0 Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.913570 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.913789 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf") on node "crc" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.946724 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9rfg-config-wbvfp" event={"ID":"d8786710-6c66-4f87-9f8b-f0b1874c46e4","Type":"ContainerDied","Data":"651a8638af070b8ed037d4a7b6add8c72c9aed2f8783e9750052685c3c84d455"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.946772 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="651a8638af070b8ed037d4a7b6add8c72c9aed2f8783e9750052685c3c84d455" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.946839 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9rfg-config-wbvfp" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.953787 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"148cf8fd-f7c0-4355-a54b-ff8052293bc4","Type":"ContainerDied","Data":"5a75377ab27caa7f696d293f602be2f5e070a6d5d85f9fe290c3cbe4b1b1a72f"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.953845 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.953869 4718 scope.go:117] "RemoveContainer" containerID="27cf948be2d3073f63b16d346d2fa5cfb356943b5526c8b9aa8a69a7ece6b11b" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.960690 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5lj86" event={"ID":"2f60062d-a297-484e-a230-41a8c9f8e5e4","Type":"ContainerStarted","Data":"9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.966818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" event={"ID":"4a487cdb-2762-422f-bdb5-df30a8ce6f20","Type":"ContainerDied","Data":"8193b7034baee647da399ead99b97a91786c3720a9d872dc39a7c190d0671631"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.966864 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8193b7034baee647da399ead99b97a91786c3720a9d872dc39a7c190d0671631" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.966942 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1b7b-account-create-update-lnclr" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.974811 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.974806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-f79tn" event={"ID":"bc06a1f0-3166-4c08-bdef-e50e7ec1829f","Type":"ContainerDied","Data":"2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.974955 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2798a06f638c6c8587e20bb8a9c37bd68f8172a4bcbe2b4dc77ee4861fc1c4fc" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.976374 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9hbt" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.976360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9hbt" event={"ID":"761ff9db-524f-41eb-9386-63e4d4338ba0","Type":"ContainerDied","Data":"8cfe83106adaa1cfc12609722f9ae8edf288d89bed58c5f65ccaf68cccf54dcd"} Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.976559 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cfe83106adaa1cfc12609722f9ae8edf288d89bed58c5f65ccaf68cccf54dcd" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.979609 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") on node \"crc\" DevicePath \"\"" Jan 23 16:36:58 crc kubenswrapper[4718]: I0123 16:36:58.986384 4718 scope.go:117] "RemoveContainer" containerID="d6493d1966a9e258cb3a251aad8359e9a968e642581d2453e5c83eb60e975ea1" Jan 23 16:36:58 crc kubenswrapper[4718]: E0123 16:36:58.986462 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-cl7vm" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.017752 4718 scope.go:117] "RemoveContainer" containerID="84db3a5916a27ae74bb4bcf9237a619a6327726a16eb4d4ddd9aa2a66c198133" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.079956 4718 scope.go:117] "RemoveContainer" containerID="62f23b39e112ce7efcce20f00cec1b8b83fbf3ccbf4e593270225b1ca91c68db" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.097753 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.116903 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.160743 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" path="/var/lib/kubelet/pods/148cf8fd-f7c0-4355-a54b-ff8052293bc4/volumes" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.161623 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162068 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="init-config-reloader" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162080 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="init-config-reloader" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162093 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc06a1f0-3166-4c08-bdef-e50e7ec1829f" containerName="mariadb-database-create" Jan 23 16:36:59 crc 
kubenswrapper[4718]: I0123 16:36:59.162099 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc06a1f0-3166-4c08-bdef-e50e7ec1829f" containerName="mariadb-database-create" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162112 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761ff9db-524f-41eb-9386-63e4d4338ba0" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162118 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="761ff9db-524f-41eb-9386-63e4d4338ba0" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162136 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a487cdb-2762-422f-bdb5-df30a8ce6f20" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162142 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a487cdb-2762-422f-bdb5-df30a8ce6f20" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162153 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8786710-6c66-4f87-9f8b-f0b1874c46e4" containerName="ovn-config" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162158 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8786710-6c66-4f87-9f8b-f0b1874c46e4" containerName="ovn-config" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162172 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="prometheus" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162177 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="prometheus" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162195 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" 
containerName="thanos-sidecar" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162201 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="thanos-sidecar" Jan 23 16:36:59 crc kubenswrapper[4718]: E0123 16:36:59.162223 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="config-reloader" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162229 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="config-reloader" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162428 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="thanos-sidecar" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162440 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc06a1f0-3166-4c08-bdef-e50e7ec1829f" containerName="mariadb-database-create" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162458 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="config-reloader" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162470 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8786710-6c66-4f87-9f8b-f0b1874c46e4" containerName="ovn-config" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162484 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="761ff9db-524f-41eb-9386-63e4d4338ba0" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162494 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="148cf8fd-f7c0-4355-a54b-ff8052293bc4" containerName="prometheus" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.162504 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a487cdb-2762-422f-bdb5-df30a8ce6f20" containerName="mariadb-account-create-update" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.165294 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.172286 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.172494 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.175980 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.176249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.176584 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.177086 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.178244 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.176978 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xl2lt" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.179467 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.182971 
4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188143 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7ps\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-kube-api-access-zm7ps\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188272 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188319 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188378 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188417 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188442 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9a386a29-4af1-4f01-ac73-771210f5a97f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188468 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188556 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.188609 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290120 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290206 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290235 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9a386a29-4af1-4f01-ac73-771210f5a97f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290319 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290375 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290429 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7ps\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-kube-api-access-zm7ps\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.290561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.294012 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.296407 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.297720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9a386a29-4af1-4f01-ac73-771210f5a97f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.300440 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.300475 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6d1c620b879242608879e94752b6e1bafbbd2b66e575591b8e49353c60cb357/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.300965 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.300984 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.301502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9a386a29-4af1-4f01-ac73-771210f5a97f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.301559 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.302664 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-config\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.303306 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.306166 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.318225 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9a386a29-4af1-4f01-ac73-771210f5a97f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.319224 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7ps\" (UniqueName: \"kubernetes.io/projected/9a386a29-4af1-4f01-ac73-771210f5a97f-kube-api-access-zm7ps\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.349898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k9b4f"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.373452 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9aa1-account-create-update-znt6h"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.373938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-367e04e8-8eb1-4f1b-a386-7dedc2dc90cf\") pod \"prometheus-metric-storage-0\" (UID: \"9a386a29-4af1-4f01-ac73-771210f5a97f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.395942 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-db-sync-s2sfc"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.422470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b6d3-account-create-update-pxk6l"] Jan 23 16:36:59 crc kubenswrapper[4718]: W0123 16:36:59.427370 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode891ff28_e439_4bee_8938_4f148f0b734d.slice/crio-46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0 WatchSource:0}: Error finding container 46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0: Status 404 returned error can't find the container with id 46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0 Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.458583 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5lntz"] Jan 23 16:36:59 crc kubenswrapper[4718]: W0123 16:36:59.463957 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c2206d1_ef66_450d_a83a_9c7c17b8e96d.slice/crio-a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56 WatchSource:0}: Error finding container a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56: Status 404 returned error can't find the container with id a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56 Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.537516 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.542695 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c9rfg-config-wbvfp"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.553753 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c9rfg-config-wbvfp"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.676923 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-86cf-account-create-update-rzlzf"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.697805 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l4rtr"] Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.705126 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5d4-account-create-update-zvckr"] Jan 23 16:36:59 crc kubenswrapper[4718]: W0123 16:36:59.755545 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb87063a9_a1b8_4120_9770_f939e3e16a7b.slice/crio-aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07 WatchSource:0}: Error finding container aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07: Status 404 returned error can't find the container with id aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07 Jan 23 16:36:59 crc kubenswrapper[4718]: I0123 16:36:59.922447 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.009499 4718 generic.go:334] "Generic (PLEG): container finished" podID="2f60062d-a297-484e-a230-41a8c9f8e5e4" containerID="f0d4990536a6dfd88ffba2bd09d8011bdc6fbcbefe8d58bb47df746480f20e2d" exitCode=0 Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.010133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-5lj86" event={"ID":"2f60062d-a297-484e-a230-41a8c9f8e5e4","Type":"ContainerDied","Data":"f0d4990536a6dfd88ffba2bd09d8011bdc6fbcbefe8d58bb47df746480f20e2d"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.025729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5lntz" event={"ID":"e891ff28-e439-4bee-8938-4f148f0b734d","Type":"ContainerStarted","Data":"9bc0c103f1ca07aa38be9bd4661db3a8ceb2b993b8bd92ad1c2a22f7dbbbb239"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.025856 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5lntz" event={"ID":"e891ff28-e439-4bee-8938-4f148f0b734d","Type":"ContainerStarted","Data":"46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.045297 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-5lntz" podStartSLOduration=5.045275171 podStartE2EDuration="5.045275171s" podCreationTimestamp="2026-01-23 16:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:00.041874868 +0000 UTC m=+1221.189116859" watchObservedRunningTime="2026-01-23 16:37:00.045275171 +0000 UTC m=+1221.192517152" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.067338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2sfc" event={"ID":"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8","Type":"ContainerStarted","Data":"bb82cf3603081ad67a9cb2b2531e5f6ce8830b992674fd3d6105753183f52fd3"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.068862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5d4-account-create-update-zvckr" 
event={"ID":"b87063a9-a1b8-4120-9770-f939e3e16a7b","Type":"ContainerStarted","Data":"aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.072289 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-86cf-account-create-update-rzlzf" event={"ID":"28234654-ae42-4f18-88ed-74e66f7b91a3","Type":"ContainerStarted","Data":"96ebc348619b64d2c1a799f573ab7397c06a3817481f206823b00a5d880d7ef6"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.075976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b6d3-account-create-update-pxk6l" event={"ID":"0c2206d1-ef66-450d-a83a-9c7c17b8e96d","Type":"ContainerStarted","Data":"feca561fd8b606f7d184cd45cd011d5b0fecdefafa244f915fa599d0e6940a3f"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.076019 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b6d3-account-create-update-pxk6l" event={"ID":"0c2206d1-ef66-450d-a83a-9c7c17b8e96d","Type":"ContainerStarted","Data":"a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.087414 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4rtr" event={"ID":"febec68a-b3c0-44fe-853f-7978c5012430","Type":"ContainerStarted","Data":"0b210d6796c4cc390141c89d6205ff6436473802a7398819127604bbb2a6e665"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.096692 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-86cf-account-create-update-rzlzf" podStartSLOduration=5.096658717 podStartE2EDuration="5.096658717s" podCreationTimestamp="2026-01-23 16:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:00.08866601 +0000 UTC m=+1221.235908001" watchObservedRunningTime="2026-01-23 16:37:00.096658717 +0000 UTC 
m=+1221.243900718" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.099535 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"0d9ff674682fe6aa8d0b77ddf2c28c96a1f312ae0e57b5531b698713ad0578d9"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.106621 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9b4f" event={"ID":"e99e0f19-90ef-4836-a587-0e137f44c7bb","Type":"ContainerStarted","Data":"4749417a1d1a410fef9090a5374e4a3b4c6863f00118ba70d47ce688475f1031"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.106807 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9b4f" event={"ID":"e99e0f19-90ef-4836-a587-0e137f44c7bb","Type":"ContainerStarted","Data":"125ffd444f1fcd8f271bc6f0f18fee2a49119ef9499facefdf53c9182349ea4d"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.109308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9aa1-account-create-update-znt6h" event={"ID":"f3dc1ea4-e411-4a36-ace6-9c026462f7d0","Type":"ContainerStarted","Data":"657997542b08555325b68c9d865a781d35f4875c890986e8685764a61cf7b0fd"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.109334 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9aa1-account-create-update-znt6h" event={"ID":"f3dc1ea4-e411-4a36-ace6-9c026462f7d0","Type":"ContainerStarted","Data":"b3d5b4558403994bb5ea592f5492312a1b706464e8f56ea947e55875e649ad4e"} Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.124127 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-b6d3-account-create-update-pxk6l" podStartSLOduration=5.124097183 podStartE2EDuration="5.124097183s" podCreationTimestamp="2026-01-23 16:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-23 16:37:00.103291397 +0000 UTC m=+1221.250533388" watchObservedRunningTime="2026-01-23 16:37:00.124097183 +0000 UTC m=+1221.271339204" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.139198 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-l4rtr" podStartSLOduration=6.139174852 podStartE2EDuration="6.139174852s" podCreationTimestamp="2026-01-23 16:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:00.125648974 +0000 UTC m=+1221.272890965" watchObservedRunningTime="2026-01-23 16:37:00.139174852 +0000 UTC m=+1221.286416843" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.152278 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-k9b4f" podStartSLOduration=5.152253608 podStartE2EDuration="5.152253608s" podCreationTimestamp="2026-01-23 16:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:00.149560144 +0000 UTC m=+1221.296802135" watchObservedRunningTime="2026-01-23 16:37:00.152253608 +0000 UTC m=+1221.299495589" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.184576 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-9aa1-account-create-update-znt6h" podStartSLOduration=6.184528464 podStartE2EDuration="6.184528464s" podCreationTimestamp="2026-01-23 16:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:00.16819402 +0000 UTC m=+1221.315436011" watchObservedRunningTime="2026-01-23 16:37:00.184528464 +0000 UTC m=+1221.331770445" Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.234763 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 16:37:00 crc kubenswrapper[4718]: W0123 16:37:00.256787 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a386a29_4af1_4f01_ac73_771210f5a97f.slice/crio-0f27716b39e2a9abe98b1e29840a0e46669c1ef9c6d47a9a3d8b790111f9453d WatchSource:0}: Error finding container 0f27716b39e2a9abe98b1e29840a0e46669c1ef9c6d47a9a3d8b790111f9453d: Status 404 returned error can't find the container with id 0f27716b39e2a9abe98b1e29840a0e46669c1ef9c6d47a9a3d8b790111f9453d Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.832765 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9hbt"] Jan 23 16:37:00 crc kubenswrapper[4718]: I0123 16:37:00.844266 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d9hbt"] Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.119958 4718 generic.go:334] "Generic (PLEG): container finished" podID="0c2206d1-ef66-450d-a83a-9c7c17b8e96d" containerID="feca561fd8b606f7d184cd45cd011d5b0fecdefafa244f915fa599d0e6940a3f" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.120055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b6d3-account-create-update-pxk6l" event={"ID":"0c2206d1-ef66-450d-a83a-9c7c17b8e96d","Type":"ContainerDied","Data":"feca561fd8b606f7d184cd45cd011d5b0fecdefafa244f915fa599d0e6940a3f"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.122431 4718 generic.go:334] "Generic (PLEG): container finished" podID="febec68a-b3c0-44fe-853f-7978c5012430" containerID="1804ffa282c99d771644734d87712803fde542c04bd872b3637d41045dacb23a" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.122501 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4rtr" 
event={"ID":"febec68a-b3c0-44fe-853f-7978c5012430","Type":"ContainerDied","Data":"1804ffa282c99d771644734d87712803fde542c04bd872b3637d41045dacb23a"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.127126 4718 generic.go:334] "Generic (PLEG): container finished" podID="e891ff28-e439-4bee-8938-4f148f0b734d" containerID="9bc0c103f1ca07aa38be9bd4661db3a8ceb2b993b8bd92ad1c2a22f7dbbbb239" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.127245 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5lntz" event={"ID":"e891ff28-e439-4bee-8938-4f148f0b734d","Type":"ContainerDied","Data":"9bc0c103f1ca07aa38be9bd4661db3a8ceb2b993b8bd92ad1c2a22f7dbbbb239"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.130548 4718 generic.go:334] "Generic (PLEG): container finished" podID="b87063a9-a1b8-4120-9770-f939e3e16a7b" containerID="3791419ffe808d0de7da234ef9f296849604382d618a3582e1957ca9d31224b1" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.130697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5d4-account-create-update-zvckr" event={"ID":"b87063a9-a1b8-4120-9770-f939e3e16a7b","Type":"ContainerDied","Data":"3791419ffe808d0de7da234ef9f296849604382d618a3582e1957ca9d31224b1"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.135540 4718 generic.go:334] "Generic (PLEG): container finished" podID="e99e0f19-90ef-4836-a587-0e137f44c7bb" containerID="4749417a1d1a410fef9090a5374e4a3b4c6863f00118ba70d47ce688475f1031" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.135648 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9b4f" event={"ID":"e99e0f19-90ef-4836-a587-0e137f44c7bb","Type":"ContainerDied","Data":"4749417a1d1a410fef9090a5374e4a3b4c6863f00118ba70d47ce688475f1031"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.141537 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="f3dc1ea4-e411-4a36-ace6-9c026462f7d0" containerID="657997542b08555325b68c9d865a781d35f4875c890986e8685764a61cf7b0fd" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.145391 4718 generic.go:334] "Generic (PLEG): container finished" podID="28234654-ae42-4f18-88ed-74e66f7b91a3" containerID="2a1fe088040d79cfbc4a658edb49b2d121b216c99aff74320d8f7c9c0e60d21c" exitCode=0 Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.155431 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761ff9db-524f-41eb-9386-63e4d4338ba0" path="/var/lib/kubelet/pods/761ff9db-524f-41eb-9386-63e4d4338ba0/volumes" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.156001 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8786710-6c66-4f87-9f8b-f0b1874c46e4" path="/var/lib/kubelet/pods/d8786710-6c66-4f87-9f8b-f0b1874c46e4/volumes" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.156849 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9aa1-account-create-update-znt6h" event={"ID":"f3dc1ea4-e411-4a36-ace6-9c026462f7d0","Type":"ContainerDied","Data":"657997542b08555325b68c9d865a781d35f4875c890986e8685764a61cf7b0fd"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.156887 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerStarted","Data":"0f27716b39e2a9abe98b1e29840a0e46669c1ef9c6d47a9a3d8b790111f9453d"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.156900 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-86cf-account-create-update-rzlzf" event={"ID":"28234654-ae42-4f18-88ed-74e66f7b91a3","Type":"ContainerDied","Data":"2a1fe088040d79cfbc4a658edb49b2d121b216c99aff74320d8f7c9c0e60d21c"} Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.751672 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5lj86" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.866754 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:37:01 crc kubenswrapper[4718]: E0123 16:37:01.867301 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f60062d-a297-484e-a230-41a8c9f8e5e4" containerName="mariadb-database-create" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.867315 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f60062d-a297-484e-a230-41a8c9f8e5e4" containerName="mariadb-database-create" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.867784 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f60062d-a297-484e-a230-41a8c9f8e5e4" containerName="mariadb-database-create" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.868836 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.877009 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.879641 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.897007 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts\") pod \"2f60062d-a297-484e-a230-41a8c9f8e5e4\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.898303 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s67gp\" (UniqueName: \"kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp\") pod 
\"2f60062d-a297-484e-a230-41a8c9f8e5e4\" (UID: \"2f60062d-a297-484e-a230-41a8c9f8e5e4\") " Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.900362 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f60062d-a297-484e-a230-41a8c9f8e5e4" (UID: "2f60062d-a297-484e-a230-41a8c9f8e5e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:01 crc kubenswrapper[4718]: I0123 16:37:01.912166 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp" (OuterVolumeSpecName: "kube-api-access-s67gp") pod "2f60062d-a297-484e-a230-41a8c9f8e5e4" (UID: "2f60062d-a297-484e-a230-41a8c9f8e5e4"). InnerVolumeSpecName "kube-api-access-s67gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.003194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.003292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59cf\" (UniqueName: \"kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.003568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.004017 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f60062d-a297-484e-a230-41a8c9f8e5e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.004035 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s67gp\" (UniqueName: \"kubernetes.io/projected/2f60062d-a297-484e-a230-41a8c9f8e5e4-kube-api-access-s67gp\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.106943 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59cf\" (UniqueName: \"kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.107033 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.107182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.111945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.121564 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.134766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59cf\" (UniqueName: \"kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf\") pod \"mysqld-exporter-0\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.163443 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dmmwt"] Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.165092 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.167603 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.181029 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dmmwt"] Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.210258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5lj86" event={"ID":"2f60062d-a297-484e-a230-41a8c9f8e5e4","Type":"ContainerDied","Data":"9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0"} Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.210317 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f3bbdf0d2195dea2c899493572e742d0f53c81f69b3cf34000620d3c58f6fc0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.210459 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5lj86" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.216287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"99ff4578a6a19683686e6247a509ef1b448b59f4651147ff52cbf9b90d074f10"} Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.216337 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"53db696865838518dd462d7645606c8cbbaa731c5be43b9b7c1295f6141af13a"} Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.284952 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.321396 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts\") pod \"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.321553 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5tt\" (UniqueName: \"kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt\") pod \"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.423244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5tt\" (UniqueName: \"kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt\") pod \"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.423418 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts\") pod \"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.426885 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts\") pod 
\"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.447130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5tt\" (UniqueName: \"kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt\") pod \"root-account-create-update-dmmwt\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:02 crc kubenswrapper[4718]: I0123 16:37:02.528191 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:04 crc kubenswrapper[4718]: I0123 16:37:04.254770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerStarted","Data":"0a3b46b1ad837e4f47559198503e783d933d7e3a18e5204a88e267256b9dc9e1"} Jan 23 16:37:05 crc kubenswrapper[4718]: E0123 16:37:05.391294 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.334555 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b6d3-account-create-update-pxk6l" event={"ID":"0c2206d1-ef66-450d-a83a-9c7c17b8e96d","Type":"ContainerDied","Data":"a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.335239 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a930c26b9a2f945e82395bfbeea7f134561fb88fbfd01176c6420d4d02e7aa56" Jan 23 
16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.341919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4rtr" event={"ID":"febec68a-b3c0-44fe-853f-7978c5012430","Type":"ContainerDied","Data":"0b210d6796c4cc390141c89d6205ff6436473802a7398819127604bbb2a6e665"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.341960 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b210d6796c4cc390141c89d6205ff6436473802a7398819127604bbb2a6e665" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.344773 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5lntz" event={"ID":"e891ff28-e439-4bee-8938-4f148f0b734d","Type":"ContainerDied","Data":"46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.344795 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d030d550a645cbcd55aecff653383215abaa48fcc6760d05a79644db919ee0" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.346033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5d4-account-create-update-zvckr" event={"ID":"b87063a9-a1b8-4120-9770-f939e3e16a7b","Type":"ContainerDied","Data":"aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.346061 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4647fd108e1dff6bbd5e39aba0215c32c92f79cc3ad2a39fd1876f333b6b07" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.347136 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9b4f" event={"ID":"e99e0f19-90ef-4836-a587-0e137f44c7bb","Type":"ContainerDied","Data":"125ffd444f1fcd8f271bc6f0f18fee2a49119ef9499facefdf53c9182349ea4d"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.347151 4718 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="125ffd444f1fcd8f271bc6f0f18fee2a49119ef9499facefdf53c9182349ea4d" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.348441 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9aa1-account-create-update-znt6h" event={"ID":"f3dc1ea4-e411-4a36-ace6-9c026462f7d0","Type":"ContainerDied","Data":"b3d5b4558403994bb5ea592f5492312a1b706464e8f56ea947e55875e649ad4e"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.348460 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d5b4558403994bb5ea592f5492312a1b706464e8f56ea947e55875e649ad4e" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.349721 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-86cf-account-create-update-rzlzf" event={"ID":"28234654-ae42-4f18-88ed-74e66f7b91a3","Type":"ContainerDied","Data":"96ebc348619b64d2c1a799f573ab7397c06a3817481f206823b00a5d880d7ef6"} Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.349743 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ebc348619b64d2c1a799f573ab7397c06a3817481f206823b00a5d880d7ef6" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.389401 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.438556 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5lntz" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.447789 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4rtr" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.497438 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9b4f" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.499854 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.510651 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.524294 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.553345 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94nwb\" (UniqueName: \"kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb\") pod \"febec68a-b3c0-44fe-853f-7978c5012430\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.555276 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts\") pod \"febec68a-b3c0-44fe-853f-7978c5012430\" (UID: \"febec68a-b3c0-44fe-853f-7978c5012430\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.555368 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts\") pod \"e891ff28-e439-4bee-8938-4f148f0b734d\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.555894 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbf2v\" (UniqueName: 
\"kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v\") pod \"e891ff28-e439-4bee-8938-4f148f0b734d\" (UID: \"e891ff28-e439-4bee-8938-4f148f0b734d\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.556118 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts\") pod \"28234654-ae42-4f18-88ed-74e66f7b91a3\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.557279 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "febec68a-b3c0-44fe-853f-7978c5012430" (UID: "febec68a-b3c0-44fe-853f-7978c5012430"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.557802 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5k2\" (UniqueName: \"kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2\") pod \"28234654-ae42-4f18-88ed-74e66f7b91a3\" (UID: \"28234654-ae42-4f18-88ed-74e66f7b91a3\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.557815 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e891ff28-e439-4bee-8938-4f148f0b734d" (UID: "e891ff28-e439-4bee-8938-4f148f0b734d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.558259 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28234654-ae42-4f18-88ed-74e66f7b91a3" (UID: "28234654-ae42-4f18-88ed-74e66f7b91a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.559747 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febec68a-b3c0-44fe-853f-7978c5012430-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.559776 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e891ff28-e439-4bee-8938-4f148f0b734d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.559788 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28234654-ae42-4f18-88ed-74e66f7b91a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.563161 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb" (OuterVolumeSpecName: "kube-api-access-94nwb") pod "febec68a-b3c0-44fe-853f-7978c5012430" (UID: "febec68a-b3c0-44fe-853f-7978c5012430"). InnerVolumeSpecName "kube-api-access-94nwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.565398 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2" (OuterVolumeSpecName: "kube-api-access-bk5k2") pod "28234654-ae42-4f18-88ed-74e66f7b91a3" (UID: "28234654-ae42-4f18-88ed-74e66f7b91a3"). InnerVolumeSpecName "kube-api-access-bk5k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.573493 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v" (OuterVolumeSpecName: "kube-api-access-bbf2v") pod "e891ff28-e439-4bee-8938-4f148f0b734d" (UID: "e891ff28-e439-4bee-8938-4f148f0b734d"). InnerVolumeSpecName "kube-api-access-bbf2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.660812 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q2fq\" (UniqueName: \"kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq\") pod \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661322 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7twt\" (UniqueName: \"kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt\") pod \"e99e0f19-90ef-4836-a587-0e137f44c7bb\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661481 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts\") pod 
\"e99e0f19-90ef-4836-a587-0e137f44c7bb\" (UID: \"e99e0f19-90ef-4836-a587-0e137f44c7bb\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661520 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts\") pod \"b87063a9-a1b8-4120-9770-f939e3e16a7b\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661573 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts\") pod \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\" (UID: \"0c2206d1-ef66-450d-a83a-9c7c17b8e96d\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661732 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs2ls\" (UniqueName: \"kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls\") pod \"b87063a9-a1b8-4120-9770-f939e3e16a7b\" (UID: \"b87063a9-a1b8-4120-9770-f939e3e16a7b\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661784 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6dt7\" (UniqueName: \"kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7\") pod \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.661806 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts\") pod \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\" (UID: \"f3dc1ea4-e411-4a36-ace6-9c026462f7d0\") " Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662227 4718 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-bbf2v\" (UniqueName: \"kubernetes.io/projected/e891ff28-e439-4bee-8938-4f148f0b734d-kube-api-access-bbf2v\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662244 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5k2\" (UniqueName: \"kubernetes.io/projected/28234654-ae42-4f18-88ed-74e66f7b91a3-kube-api-access-bk5k2\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662253 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94nwb\" (UniqueName: \"kubernetes.io/projected/febec68a-b3c0-44fe-853f-7978c5012430-kube-api-access-94nwb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662545 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c2206d1-ef66-450d-a83a-9c7c17b8e96d" (UID: "0c2206d1-ef66-450d-a83a-9c7c17b8e96d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662623 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3dc1ea4-e411-4a36-ace6-9c026462f7d0" (UID: "f3dc1ea4-e411-4a36-ace6-9c026462f7d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.662661 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b87063a9-a1b8-4120-9770-f939e3e16a7b" (UID: "b87063a9-a1b8-4120-9770-f939e3e16a7b"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.663317 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e99e0f19-90ef-4836-a587-0e137f44c7bb" (UID: "e99e0f19-90ef-4836-a587-0e137f44c7bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.665056 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq" (OuterVolumeSpecName: "kube-api-access-8q2fq") pod "0c2206d1-ef66-450d-a83a-9c7c17b8e96d" (UID: "0c2206d1-ef66-450d-a83a-9c7c17b8e96d"). InnerVolumeSpecName "kube-api-access-8q2fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.666190 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls" (OuterVolumeSpecName: "kube-api-access-hs2ls") pod "b87063a9-a1b8-4120-9770-f939e3e16a7b" (UID: "b87063a9-a1b8-4120-9770-f939e3e16a7b"). InnerVolumeSpecName "kube-api-access-hs2ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.666227 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt" (OuterVolumeSpecName: "kube-api-access-s7twt") pod "e99e0f19-90ef-4836-a587-0e137f44c7bb" (UID: "e99e0f19-90ef-4836-a587-0e137f44c7bb"). InnerVolumeSpecName "kube-api-access-s7twt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.666997 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7" (OuterVolumeSpecName: "kube-api-access-m6dt7") pod "f3dc1ea4-e411-4a36-ace6-9c026462f7d0" (UID: "f3dc1ea4-e411-4a36-ace6-9c026462f7d0"). InnerVolumeSpecName "kube-api-access-m6dt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.750419 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:37:06 crc kubenswrapper[4718]: W0123 16:37:06.760943 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533ff73f_ffa0_41ae_a58a_9ef5491270e6.slice/crio-393376dd6e2d9aa58a79f00d3e5c91651d6950742d8dd7bf85ef1035d869e3b8 WatchSource:0}: Error finding container 393376dd6e2d9aa58a79f00d3e5c91651d6950742d8dd7bf85ef1035d869e3b8: Status 404 returned error can't find the container with id 393376dd6e2d9aa58a79f00d3e5c91651d6950742d8dd7bf85ef1035d869e3b8 Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764559 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs2ls\" (UniqueName: \"kubernetes.io/projected/b87063a9-a1b8-4120-9770-f939e3e16a7b-kube-api-access-hs2ls\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764588 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6dt7\" (UniqueName: \"kubernetes.io/projected/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-kube-api-access-m6dt7\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764602 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3dc1ea4-e411-4a36-ace6-9c026462f7d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764613 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q2fq\" (UniqueName: \"kubernetes.io/projected/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-kube-api-access-8q2fq\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764624 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7twt\" (UniqueName: \"kubernetes.io/projected/e99e0f19-90ef-4836-a587-0e137f44c7bb-kube-api-access-s7twt\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764667 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e99e0f19-90ef-4836-a587-0e137f44c7bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764679 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b87063a9-a1b8-4120-9770-f939e3e16a7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.764690 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2206d1-ef66-450d-a83a-9c7c17b8e96d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:06 crc kubenswrapper[4718]: I0123 16:37:06.857892 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dmmwt"] Jan 23 16:37:06 crc kubenswrapper[4718]: W0123 16:37:06.858936 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd329934d_534e_4bee_8e8f_caf46caec698.slice/crio-fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1 WatchSource:0}: Error 
finding container fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1: Status 404 returned error can't find the container with id fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1 Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.363407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"533ff73f-ffa0-41ae-a58a-9ef5491270e6","Type":"ContainerStarted","Data":"393376dd6e2d9aa58a79f00d3e5c91651d6950742d8dd7bf85ef1035d869e3b8"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.365996 4718 generic.go:334] "Generic (PLEG): container finished" podID="d329934d-534e-4bee-8e8f-caf46caec698" containerID="2b16a67beadd9baf553b4ac099ee87ab37142c4add0e1e4b54f77d1d0de95131" exitCode=0 Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.366055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dmmwt" event={"ID":"d329934d-534e-4bee-8e8f-caf46caec698","Type":"ContainerDied","Data":"2b16a67beadd9baf553b4ac099ee87ab37142c4add0e1e4b54f77d1d0de95131"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.366126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dmmwt" event={"ID":"d329934d-534e-4bee-8e8f-caf46caec698","Type":"ContainerStarted","Data":"fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.369052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2sfc" event={"ID":"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8","Type":"ContainerStarted","Data":"01d6a7d89e4e4fa76f26a2197178880a56f7ab533a47ffb4fdda5a7e34254d20"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373137 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"a4d0d1b9dfd8d6d647ed84490e9e8c8723f7d3adf13451f6693893a740597414"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"3e3ba28bf7f537fde9c1709742920eda897156320721b33e1d505fc17868a3f2"} Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373221 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5d4-account-create-update-zvckr" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373254 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-86cf-account-create-update-rzlzf" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373279 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9aa1-account-create-update-znt6h" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373268 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b6d3-account-create-update-pxk6l" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373338 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5lntz" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.373363 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4rtr" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.375829 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9b4f" Jan 23 16:37:07 crc kubenswrapper[4718]: I0123 16:37:07.448352 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-s2sfc" podStartSLOduration=5.678588217 podStartE2EDuration="12.448327483s" podCreationTimestamp="2026-01-23 16:36:55 +0000 UTC" firstStartedPulling="2026-01-23 16:36:59.382407523 +0000 UTC m=+1220.529649514" lastFinishedPulling="2026-01-23 16:37:06.152146779 +0000 UTC m=+1227.299388780" observedRunningTime="2026-01-23 16:37:07.410871686 +0000 UTC m=+1228.558113707" watchObservedRunningTime="2026-01-23 16:37:07.448327483 +0000 UTC m=+1228.595569474" Jan 23 16:37:11 crc kubenswrapper[4718]: E0123 16:37:11.336112 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.404978 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.443823 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dmmwt" event={"ID":"d329934d-534e-4bee-8e8f-caf46caec698","Type":"ContainerDied","Data":"fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1"} Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.443879 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc21591c82aa503218ac969ee488b71a09ffd9dc7620e8c8e12373278c1e6ff1" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.443986 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dmmwt" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.502441 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts\") pod \"d329934d-534e-4bee-8e8f-caf46caec698\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.502716 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h5tt\" (UniqueName: \"kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt\") pod \"d329934d-534e-4bee-8e8f-caf46caec698\" (UID: \"d329934d-534e-4bee-8e8f-caf46caec698\") " Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.503596 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d329934d-534e-4bee-8e8f-caf46caec698" (UID: "d329934d-534e-4bee-8e8f-caf46caec698"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.512015 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt" (OuterVolumeSpecName: "kube-api-access-2h5tt") pod "d329934d-534e-4bee-8e8f-caf46caec698" (UID: "d329934d-534e-4bee-8e8f-caf46caec698"). InnerVolumeSpecName "kube-api-access-2h5tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.606263 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h5tt\" (UniqueName: \"kubernetes.io/projected/d329934d-534e-4bee-8e8f-caf46caec698-kube-api-access-2h5tt\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:11 crc kubenswrapper[4718]: I0123 16:37:11.606610 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d329934d-534e-4bee-8e8f-caf46caec698-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:12 crc kubenswrapper[4718]: I0123 16:37:12.457845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"306a028e1fb2bbcfe1173749f3870804bcd9c947035b38e104bb3020374a2986"} Jan 23 16:37:12 crc kubenswrapper[4718]: I0123 16:37:12.461442 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"533ff73f-ffa0-41ae-a58a-9ef5491270e6","Type":"ContainerStarted","Data":"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a"} Jan 23 16:37:12 crc kubenswrapper[4718]: I0123 16:37:12.522584 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=6.242141291 podStartE2EDuration="11.522564287s" podCreationTimestamp="2026-01-23 16:37:01 +0000 UTC" firstStartedPulling="2026-01-23 16:37:06.767109106 +0000 UTC m=+1227.914351097" lastFinishedPulling="2026-01-23 16:37:12.047532102 +0000 UTC m=+1233.194774093" observedRunningTime="2026-01-23 16:37:12.520132512 +0000 UTC m=+1233.667374503" watchObservedRunningTime="2026-01-23 16:37:12.522564287 +0000 UTC m=+1233.669806278" Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.479339 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"2163b903b9de793718602a6ca4d0b5b8203b0c81720a6351643e3d6b42ced7e2"} Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.479419 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"302f0448bb33c1f1d81083b251eb67167d707075b98c14723c7482bc18e5309c"} Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.479429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"4b1d84ca1dc8cb00c636ebd8c7f2b17c38d6f78fc3b8dec00aacedf75851301e"} Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.482069 4718 generic.go:334] "Generic (PLEG): container finished" podID="9a386a29-4af1-4f01-ac73-771210f5a97f" containerID="0a3b46b1ad837e4f47559198503e783d933d7e3a18e5204a88e267256b9dc9e1" exitCode=0 Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.482159 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerDied","Data":"0a3b46b1ad837e4f47559198503e783d933d7e3a18e5204a88e267256b9dc9e1"} Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.484597 4718 generic.go:334] "Generic (PLEG): container finished" podID="d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" containerID="01d6a7d89e4e4fa76f26a2197178880a56f7ab533a47ffb4fdda5a7e34254d20" exitCode=0 Jan 23 16:37:13 crc kubenswrapper[4718]: I0123 16:37:13.484685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2sfc" event={"ID":"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8","Type":"ContainerDied","Data":"01d6a7d89e4e4fa76f26a2197178880a56f7ab533a47ffb4fdda5a7e34254d20"} Jan 23 16:37:14 crc kubenswrapper[4718]: I0123 16:37:14.504056 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerStarted","Data":"8e5dbd5015a9814908d0111db98c821e32031f887521786ac1b00392780599c1"} Jan 23 16:37:14 crc kubenswrapper[4718]: I0123 16:37:14.942457 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:37:14 crc kubenswrapper[4718]: I0123 16:37:14.992572 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6s97\" (UniqueName: \"kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97\") pod \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " Jan 23 16:37:14 crc kubenswrapper[4718]: I0123 16:37:14.992769 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle\") pod \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " Jan 23 16:37:14 crc kubenswrapper[4718]: I0123 16:37:14.993312 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data\") pod \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\" (UID: \"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8\") " Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.043814 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97" (OuterVolumeSpecName: "kube-api-access-l6s97") pod "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" (UID: "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8"). InnerVolumeSpecName "kube-api-access-l6s97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.080624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" (UID: "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.095751 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6s97\" (UniqueName: \"kubernetes.io/projected/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-kube-api-access-l6s97\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.095873 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.111442 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data" (OuterVolumeSpecName: "config-data") pod "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" (UID: "d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.199876 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.455544 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.521089 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2sfc" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.521128 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2sfc" event={"ID":"d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8","Type":"ContainerDied","Data":"bb82cf3603081ad67a9cb2b2531e5f6ce8830b992674fd3d6105753183f52fd3"} Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.523495 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb82cf3603081ad67a9cb2b2531e5f6ce8830b992674fd3d6105753183f52fd3" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.527418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"81a992ea5b92e282585c38d75c33199fd3405f59127772954a5ede5e5bd66fbb"} Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.527474 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"1d89cdae5c11556a3146b2e5e83e6a5b6e244cde7018ae6565f4abb7aaaf1b45"} Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.527490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"5309e0b4cf655fe760a567d7d9a97035b78b222dbde8192d6d984f0468451305"} Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.922554 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmq5w"] Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924211 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d329934d-534e-4bee-8e8f-caf46caec698" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924230 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d329934d-534e-4bee-8e8f-caf46caec698" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28234654-ae42-4f18-88ed-74e66f7b91a3" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924250 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28234654-ae42-4f18-88ed-74e66f7b91a3" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924266 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e891ff28-e439-4bee-8938-4f148f0b734d" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924273 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e891ff28-e439-4bee-8938-4f148f0b734d" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924297 4718 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e99e0f19-90ef-4836-a587-0e137f44c7bb" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924303 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99e0f19-90ef-4836-a587-0e137f44c7bb" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924327 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" containerName="keystone-db-sync" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924334 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" containerName="keystone-db-sync" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924354 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dc1ea4-e411-4a36-ace6-9c026462f7d0" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924364 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dc1ea4-e411-4a36-ace6-9c026462f7d0" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924376 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2206d1-ef66-450d-a83a-9c7c17b8e96d" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924383 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2206d1-ef66-450d-a83a-9c7c17b8e96d" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: E0123 16:37:15.924433 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febec68a-b3c0-44fe-853f-7978c5012430" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924439 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="febec68a-b3c0-44fe-853f-7978c5012430" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: 
E0123 16:37:15.924457 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87063a9-a1b8-4120-9770-f939e3e16a7b" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87063a9-a1b8-4120-9770-f939e3e16a7b" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.924975 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="febec68a-b3c0-44fe-853f-7978c5012430" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925045 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e891ff28-e439-4bee-8938-4f148f0b734d" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925060 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dc1ea4-e411-4a36-ace6-9c026462f7d0" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925091 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99e0f19-90ef-4836-a587-0e137f44c7bb" containerName="mariadb-database-create" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925107 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87063a9-a1b8-4120-9770-f939e3e16a7b" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925120 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" containerName="keystone-db-sync" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925129 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d329934d-534e-4bee-8e8f-caf46caec698" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925148 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="28234654-ae42-4f18-88ed-74e66f7b91a3" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.925172 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2206d1-ef66-450d-a83a-9c7c17b8e96d" containerName="mariadb-account-create-update" Jan 23 16:37:15 crc kubenswrapper[4718]: I0123 16:37:15.928331 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:15.997732 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vfz5m"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.001098 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.040130 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.040448 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.040611 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.040877 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4m7cb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.053832 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.075121 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmq5w"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.133720 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vfz5m"] Jan 23 16:37:16 crc 
kubenswrapper[4718]: I0123 16:37:16.146427 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.146490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.146574 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.146610 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.146873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc 
kubenswrapper[4718]: I0123 16:37:16.149435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.149498 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.149534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8pv\" (UniqueName: \"kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.149572 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.149595 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc 
kubenswrapper[4718]: I0123 16:37:16.149639 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkwx\" (UniqueName: \"kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.229680 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dmmwt"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts\") pod \"keystone-bootstrap-vfz5m\" (UID: 
\"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256534 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256576 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8pv\" (UniqueName: \"kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256692 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " 
pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256716 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.256737 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkwx\" (UniqueName: \"kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.258020 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.259922 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dmmwt"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.260847 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.261611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc\") pod 
\"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.261761 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.289828 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-q47zv"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.291729 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.305268 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-sjrc6" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.306004 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.327744 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxkwx\" (UniqueName: \"kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.329579 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-q47zv"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.343111 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle\") pod \"keystone-bootstrap-vfz5m\" (UID: 
\"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.344147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.345138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.349782 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.368057 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b4crv"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.370137 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.376532 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b4crv"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.376862 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kw66w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.377187 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.377399 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.392777 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vbzkx"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.394396 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.402550 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.402795 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.402905 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nrl7q" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.415122 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vbzkx"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.433904 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmq5w"] Jan 23 16:37:16 crc kubenswrapper[4718]: E0123 16:37:16.434743 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[kube-api-access-zm8pv], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" podUID="d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462421 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462586 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksm4m\" (UniqueName: \"kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462687 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxc7b\" (UniqueName: \"kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.462783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.476940 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bvdgb"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.478599 
4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.486692 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.486989 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8sdrh" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.487117 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.503675 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.505760 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.525444 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bvdgb"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.550035 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8pv\" (UniqueName: \"kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv\") pod \"dnsmasq-dns-5c9d85d47c-wmq5w\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.562585 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys\") pod \"keystone-bootstrap-vfz5m\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.567066 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568681 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568720 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksm4m\" (UniqueName: \"kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxc7b\" (UniqueName: \"kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2r9p\" (UniqueName: \"kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 
16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568857 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568922 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568963 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.568999 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgtxs\" (UniqueName: \"kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569146 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569180 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569210 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569302 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569348 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.569409 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.573851 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.586801 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xpbng"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.588887 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.589232 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.597117 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jds6x" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.597451 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.602548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl7vm" event={"ID":"fdefc9e4-c27d-4b53-a75a-5a74124d31f2","Type":"ContainerStarted","Data":"72f8578ac8c3194b0a4d4f079f4f7310bbac1f7801a1a5e1b3c146b4845377a2"} Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.607374 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxc7b\" (UniqueName: \"kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.617205 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.617339 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"2891491ae8810488b1cd619a1e5218d0d11677e4b674c22263d5218e7a0f5d84"} Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.617372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"bf90a96e5a592e3057309adb1c3202b6511eea616789e16ca977fe93d68f67a1"} Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.619162 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.620394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksm4m\" (UniqueName: \"kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.624656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.627340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data\") pod \"heat-db-sync-q47zv\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.654040 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xpbng"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.655695 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.656891 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data\") pod \"cinder-db-sync-b4crv\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.673881 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.673939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.673996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxjj\" 
(UniqueName: \"kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674048 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2r9p\" (UniqueName: \"kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674124 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgtxs\" (UniqueName: 
\"kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674238 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674291 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674338 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.674368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.675322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.684842 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.684879 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.687397 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.710962 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2r9p\" (UniqueName: \"kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.713579 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.726948 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgtxs\" (UniqueName: \"kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.751396 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cl7vm" podStartSLOduration=3.5613587239999998 podStartE2EDuration="37.751377448s" podCreationTimestamp="2026-01-23 16:36:39 +0000 UTC" firstStartedPulling="2026-01-23 16:36:40.489431827 +0000 UTC m=+1201.636673808" lastFinishedPulling="2026-01-23 
16:37:14.679450541 +0000 UTC m=+1235.826692532" observedRunningTime="2026-01-23 16:37:16.635458802 +0000 UTC m=+1237.782700803" watchObservedRunningTime="2026-01-23 16:37:16.751377448 +0000 UTC m=+1237.898619439" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.764577 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts\") pod \"placement-db-sync-bvdgb\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") " pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.775344 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config\") pod \"neutron-db-sync-vbzkx\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776742 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776814 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: 
\"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776840 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776907 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxjj\" (UniqueName: \"kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.776996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.777016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m2sv\" (UniqueName: \"kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv\") pod \"barbican-db-sync-xpbng\" (UID: 
\"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.781462 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.781553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.782153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.782746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.803005 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxjj\" (UniqueName: \"kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj\") pod \"dnsmasq-dns-6ffb94d8ff-ht8pk\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:16 crc 
kubenswrapper[4718]: I0123 16:37:16.817199 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.841187 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.843978 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.844011 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.881142 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.881213 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m2sv\" (UniqueName: \"kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.881285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.886619 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.887682 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.888095 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.902309 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m2sv\" (UniqueName: \"kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv\") pod \"barbican-db-sync-xpbng\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmgw\" (UniqueName: \"kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986613 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986670 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986709 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:16 crc kubenswrapper[4718]: I0123 16:37:16.986944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmgw\" (UniqueName: \"kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw\") pod \"ceilometer-0\" (UID: 
\"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089847 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.089960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.090001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.091147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.091215 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.098243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.108935 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.113205 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmgw\" (UniqueName: \"kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.150666 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.152928 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") " pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.163888 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d329934d-534e-4bee-8e8f-caf46caec698" path="/var/lib/kubelet/pods/d329934d-534e-4bee-8e8f-caf46caec698/volumes" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.269062 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v6v9p"] Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.271157 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.274541 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.284308 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6v9p"] Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.396940 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.397023 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srskd\" (UniqueName: \"kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.437537 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vfz5m"] Jan 23 16:37:17 crc kubenswrapper[4718]: W0123 16:37:17.445109 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8e0da39_3959_4a70_9be2_5267fed5165b.slice/crio-e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805 WatchSource:0}: Error finding container e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805: Status 404 returned error can't find the container with id e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805 Jan 23 16:37:17 crc 
kubenswrapper[4718]: I0123 16:37:17.499504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.499563 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srskd\" (UniqueName: \"kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.500451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.543936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srskd\" (UniqueName: \"kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd\") pod \"root-account-create-update-v6v9p\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.629335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vfz5m" event={"ID":"d8e0da39-3959-4a70-9be2-5267fed5165b","Type":"ContainerStarted","Data":"e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805"} Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.826801 4718 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.839530 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4crv" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.850346 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.867267 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.878927 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.888202 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.894161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.912116 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:37:17 crc kubenswrapper[4718]: I0123 16:37:17.919224 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.011254 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm8pv\" (UniqueName: \"kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv\") pod \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.011838 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb\") pod \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.011931 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb\") pod \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.012079 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc\") pod \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.012203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config\") pod \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\" (UID: \"d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7\") " Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.014171 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" (UID: "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.014466 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" (UID: "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.014733 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config" (OuterVolumeSpecName: "config") pod "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" (UID: "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.015425 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" (UID: "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.029769 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.029799 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.029810 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.029819 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.059721 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv" (OuterVolumeSpecName: "kube-api-access-zm8pv") pod "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" (UID: "d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7"). InnerVolumeSpecName "kube-api-access-zm8pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.133203 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm8pv\" (UniqueName: \"kubernetes.io/projected/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7-kube-api-access-zm8pv\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.655056 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vfz5m" event={"ID":"d8e0da39-3959-4a70-9be2-5267fed5165b","Type":"ContainerStarted","Data":"fe07bc6827bb0eb6fc80607def05b3a8260ef58ca7b9fcd88a659999a2fc20e5"} Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.658783 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.673395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"d5488193835c822356f38d643262d40d6f89738c99adbfe6029caac41063185f"} Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.673458 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3383bbd9-d755-435c-9d57-c66c5cadaf09","Type":"ContainerStarted","Data":"bf04566eba42ae49b623fbae4f8772fe094bfd438e1fedd503a2fe22f9bfb8a9"} Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.675570 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmq5w" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.681942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerStarted","Data":"f694a2072dfad585be1f0db50e973f6d2dceeeb966e7a8c2e9cee530a103d7a7"} Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.690493 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vfz5m" podStartSLOduration=3.690472029 podStartE2EDuration="3.690472029s" podCreationTimestamp="2026-01-23 16:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:18.682280667 +0000 UTC m=+1239.829522668" watchObservedRunningTime="2026-01-23 16:37:18.690472029 +0000 UTC m=+1239.837714020" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.735749 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.315553541 podStartE2EDuration="1m2.735642815s" podCreationTimestamp="2026-01-23 16:36:16 +0000 UTC" firstStartedPulling="2026-01-23 16:36:59.889669684 +0000 UTC m=+1221.036911665" lastFinishedPulling="2026-01-23 16:37:14.309758928 +0000 UTC m=+1235.457000939" observedRunningTime="2026-01-23 16:37:18.722613422 +0000 UTC m=+1239.869855413" watchObservedRunningTime="2026-01-23 16:37:18.735642815 +0000 UTC m=+1239.882884806" Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.815804 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmq5w"] Jan 23 16:37:18 crc kubenswrapper[4718]: I0123 16:37:18.833333 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmq5w"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.120265 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.276387 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7" path="/var/lib/kubelet/pods/d4602a57-4fc8-4e07-beb7-02b3ecd6d9b7/volumes" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.312122 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.341065 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.347373 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.403312 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.469320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwpw\" (UniqueName: \"kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.469849 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.469909 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.470172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.470244 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.470398 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.474414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.492089 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-q47zv"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.514450 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b4crv"] Jan 23 16:37:19 crc kubenswrapper[4718]: W0123 16:37:19.518749 4718 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d083ea0_8d2d_4ca7_8530_2f558ad8bf5c.slice/crio-1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95 WatchSource:0}: Error finding container 1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95: Status 404 returned error can't find the container with id 1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95 Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.531213 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vbzkx"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.551811 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xpbng"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573388 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573462 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwpw\" (UniqueName: \"kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.573527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.574310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.574341 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.574833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.575092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.576838 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.580342 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.593062 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bvdgb"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.602173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwpw\" (UniqueName: \"kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw\") pod \"dnsmasq-dns-cf78879c9-2kxbx\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.605800 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.608873 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6v9p"] Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.697338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vbzkx" event={"ID":"2e4da531-c3ee-4c93-8de9-60f9c0ce9858","Type":"ContainerStarted","Data":"5cb4bdc41934b52e7663df2b906ea5607abeaa30bc7504bc3e6956e9dfbbbfd1"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.701755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" event={"ID":"6b0b9f4b-8cdf-4a51-9248-27f370855dec","Type":"ContainerStarted","Data":"8172f37142748294ed1eb3eeadd79475a4b89a00279266218a35d17ae39ed79e"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.704512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4crv" event={"ID":"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1","Type":"ContainerStarted","Data":"e0debcd7ae658e71a04915c0b43ace8edd448a28ddc3fc0e6b2625c9fd71c5de"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.705959 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-q47zv" event={"ID":"a3d57ff1-707c-4dd4-8922-1d910f52faf8","Type":"ContainerStarted","Data":"466ed69c7ce827179e9d767b0bf0d06fbd915ab6a696f9fa59bf6c116da3cd45"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.707561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpbng" event={"ID":"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c","Type":"ContainerStarted","Data":"1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.712782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9a386a29-4af1-4f01-ac73-771210f5a97f","Type":"ContainerStarted","Data":"1fa9f6fd89d06ded124f8b33824202c7c9e50ba9d4f978ef6d2319d1b2d7d0a7"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.716328 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerStarted","Data":"aae0a33dbf1850e5000edda2c105909deaa6a8689bb6ce667d6edcb01a7cbd94"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.718843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bvdgb" event={"ID":"beac9063-62c9-4cb1-aa45-786d02b1e9db","Type":"ContainerStarted","Data":"0926dd00466fa192a04c8f3825f57e488044edbadfa6a8627109d23fcd9bac85"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.721904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6v9p" event={"ID":"26f6559c-b048-4e80-8203-cad5a8f26c39","Type":"ContainerStarted","Data":"e3ea08bc655c403df89ee67c3c282a71bc24154880d9c84745825fbe49893a61"} Jan 23 16:37:19 crc kubenswrapper[4718]: I0123 16:37:19.753914 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.753885358 podStartE2EDuration="20.753885358s" podCreationTimestamp="2026-01-23 16:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:19.744974726 +0000 UTC m=+1240.892216727" watchObservedRunningTime="2026-01-23 16:37:19.753885358 +0000 UTC m=+1240.901127349" Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.323817 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"] Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.737920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" 
event={"ID":"1d0a620d-3a8d-418e-85d1-f0be169a3d48","Type":"ContainerStarted","Data":"49f4cc5f9b62ed798ffcf456f23460c5d21206d7460b41d2aae53ebf5c5b93f5"} Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.746135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vbzkx" event={"ID":"2e4da531-c3ee-4c93-8de9-60f9c0ce9858","Type":"ContainerStarted","Data":"4e5cc4d921274cf3d9f106afe984955d127ec71f994dcb3d025e89edba83c862"} Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.762926 4718 generic.go:334] "Generic (PLEG): container finished" podID="6b0b9f4b-8cdf-4a51-9248-27f370855dec" containerID="1c54ec580a61617eadb46d2367fa5724e26fce50742f49cd1d89000743e6e388" exitCode=0 Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.763042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" event={"ID":"6b0b9f4b-8cdf-4a51-9248-27f370855dec","Type":"ContainerDied","Data":"1c54ec580a61617eadb46d2367fa5724e26fce50742f49cd1d89000743e6e388"} Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.769571 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vbzkx" podStartSLOduration=4.76954882 podStartE2EDuration="4.76954882s" podCreationTimestamp="2026-01-23 16:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:20.763583378 +0000 UTC m=+1241.910825369" watchObservedRunningTime="2026-01-23 16:37:20.76954882 +0000 UTC m=+1241.916790811" Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.769870 4718 generic.go:334] "Generic (PLEG): container finished" podID="26f6559c-b048-4e80-8203-cad5a8f26c39" containerID="369f7da4497093c2d3f133230f8d8a1cb01e50079afd3bf31dc655587357b169" exitCode=0 Jan 23 16:37:20 crc kubenswrapper[4718]: I0123 16:37:20.772487 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-v6v9p" event={"ID":"26f6559c-b048-4e80-8203-cad5a8f26c39","Type":"ContainerDied","Data":"369f7da4497093c2d3f133230f8d8a1cb01e50079afd3bf31dc655587357b169"} Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.275210 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.447273 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb\") pod \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.447929 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxjj\" (UniqueName: \"kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj\") pod \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.448001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config\") pod \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.448235 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc\") pod \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.448458 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb\") pod \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\" (UID: \"6b0b9f4b-8cdf-4a51-9248-27f370855dec\") " Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.456949 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj" (OuterVolumeSpecName: "kube-api-access-xgxjj") pod "6b0b9f4b-8cdf-4a51-9248-27f370855dec" (UID: "6b0b9f4b-8cdf-4a51-9248-27f370855dec"). InnerVolumeSpecName "kube-api-access-xgxjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.489451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b0b9f4b-8cdf-4a51-9248-27f370855dec" (UID: "6b0b9f4b-8cdf-4a51-9248-27f370855dec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.494366 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config" (OuterVolumeSpecName: "config") pod "6b0b9f4b-8cdf-4a51-9248-27f370855dec" (UID: "6b0b9f4b-8cdf-4a51-9248-27f370855dec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.500842 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b0b9f4b-8cdf-4a51-9248-27f370855dec" (UID: "6b0b9f4b-8cdf-4a51-9248-27f370855dec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.531969 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b0b9f4b-8cdf-4a51-9248-27f370855dec" (UID: "6b0b9f4b-8cdf-4a51-9248-27f370855dec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.560993 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxjj\" (UniqueName: \"kubernetes.io/projected/6b0b9f4b-8cdf-4a51-9248-27f370855dec-kube-api-access-xgxjj\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.561470 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.561483 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.561508 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.561517 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b0b9f4b-8cdf-4a51-9248-27f370855dec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.792750 4718 generic.go:334] "Generic (PLEG): container finished" podID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" 
containerID="78dc59f3476eb5ad02c7177482d05b22cac323ea057d4ae6ac6d9e6821bbfc07" exitCode=0 Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.792872 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" event={"ID":"1d0a620d-3a8d-418e-85d1-f0be169a3d48","Type":"ContainerDied","Data":"78dc59f3476eb5ad02c7177482d05b22cac323ea057d4ae6ac6d9e6821bbfc07"} Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.795561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" event={"ID":"6b0b9f4b-8cdf-4a51-9248-27f370855dec","Type":"ContainerDied","Data":"8172f37142748294ed1eb3eeadd79475a4b89a00279266218a35d17ae39ed79e"} Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.795737 4718 scope.go:117] "RemoveContainer" containerID="1c54ec580a61617eadb46d2367fa5724e26fce50742f49cd1d89000743e6e388" Jan 23 16:37:21 crc kubenswrapper[4718]: I0123 16:37:21.796223 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-ht8pk" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.097465 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.113421 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-ht8pk"] Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.372365 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.505547 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts\") pod \"26f6559c-b048-4e80-8203-cad5a8f26c39\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.505697 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srskd\" (UniqueName: \"kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd\") pod \"26f6559c-b048-4e80-8203-cad5a8f26c39\" (UID: \"26f6559c-b048-4e80-8203-cad5a8f26c39\") " Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.507009 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26f6559c-b048-4e80-8203-cad5a8f26c39" (UID: "26f6559c-b048-4e80-8203-cad5a8f26c39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.519896 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd" (OuterVolumeSpecName: "kube-api-access-srskd") pod "26f6559c-b048-4e80-8203-cad5a8f26c39" (UID: "26f6559c-b048-4e80-8203-cad5a8f26c39"). InnerVolumeSpecName "kube-api-access-srskd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.608883 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f6559c-b048-4e80-8203-cad5a8f26c39-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.608938 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srskd\" (UniqueName: \"kubernetes.io/projected/26f6559c-b048-4e80-8203-cad5a8f26c39-kube-api-access-srskd\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.824852 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" event={"ID":"1d0a620d-3a8d-418e-85d1-f0be169a3d48","Type":"ContainerStarted","Data":"0f58d8ab7425a08edd31c56f4cde058daa496f5badbf94fa61c6ffa9300a8e84"} Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.825934 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.831809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6v9p" event={"ID":"26f6559c-b048-4e80-8203-cad5a8f26c39","Type":"ContainerDied","Data":"e3ea08bc655c403df89ee67c3c282a71bc24154880d9c84745825fbe49893a61"} Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.831854 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ea08bc655c403df89ee67c3c282a71bc24154880d9c84745825fbe49893a61" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.831915 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6v9p" Jan 23 16:37:22 crc kubenswrapper[4718]: I0123 16:37:22.853506 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" podStartSLOduration=3.853482262 podStartE2EDuration="3.853482262s" podCreationTimestamp="2026-01-23 16:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:22.853291617 +0000 UTC m=+1244.000533608" watchObservedRunningTime="2026-01-23 16:37:22.853482262 +0000 UTC m=+1244.000724243" Jan 23 16:37:23 crc kubenswrapper[4718]: I0123 16:37:23.161522 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0b9f4b-8cdf-4a51-9248-27f370855dec" path="/var/lib/kubelet/pods/6b0b9f4b-8cdf-4a51-9248-27f370855dec/volumes" Jan 23 16:37:23 crc kubenswrapper[4718]: I0123 16:37:23.850274 4718 generic.go:334] "Generic (PLEG): container finished" podID="d8e0da39-3959-4a70-9be2-5267fed5165b" containerID="fe07bc6827bb0eb6fc80607def05b3a8260ef58ca7b9fcd88a659999a2fc20e5" exitCode=0 Jan 23 16:37:23 crc kubenswrapper[4718]: I0123 16:37:23.850365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vfz5m" event={"ID":"d8e0da39-3959-4a70-9be2-5267fed5165b","Type":"ContainerDied","Data":"fe07bc6827bb0eb6fc80607def05b3a8260ef58ca7b9fcd88a659999a2fc20e5"} Jan 23 16:37:24 crc kubenswrapper[4718]: I0123 16:37:24.538416 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 16:37:25 crc kubenswrapper[4718]: E0123 16:37:25.780030 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": 
RecentStats: unable to find data in memory cache]" Jan 23 16:37:25 crc kubenswrapper[4718]: I0123 16:37:25.960715 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v6v9p"] Jan 23 16:37:25 crc kubenswrapper[4718]: I0123 16:37:25.971886 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v6v9p"] Jan 23 16:37:26 crc kubenswrapper[4718]: E0123 16:37:26.086547 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.156417 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f6559c-b048-4e80-8203-cad5a8f26c39" path="/var/lib/kubelet/pods/26f6559c-b048-4e80-8203-cad5a8f26c39/volumes" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.326352 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k9cg2"] Jan 23 16:37:27 crc kubenswrapper[4718]: E0123 16:37:27.327051 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f6559c-b048-4e80-8203-cad5a8f26c39" containerName="mariadb-account-create-update" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.327076 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f6559c-b048-4e80-8203-cad5a8f26c39" containerName="mariadb-account-create-update" Jan 23 16:37:27 crc kubenswrapper[4718]: E0123 16:37:27.327101 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0b9f4b-8cdf-4a51-9248-27f370855dec" containerName="init" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.327111 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0b9f4b-8cdf-4a51-9248-27f370855dec" 
containerName="init" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.327416 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f6559c-b048-4e80-8203-cad5a8f26c39" containerName="mariadb-account-create-update" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.327445 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0b9f4b-8cdf-4a51-9248-27f370855dec" containerName="init" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.328438 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.331583 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.343471 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k9cg2"] Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.457119 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.457192 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqkm\" (UniqueName: \"kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.560658 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.560824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqkm\" (UniqueName: \"kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.561741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.588392 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqkm\" (UniqueName: \"kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm\") pod \"root-account-create-update-k9cg2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:27 crc kubenswrapper[4718]: I0123 16:37:27.651431 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:28 crc kubenswrapper[4718]: I0123 16:37:28.875374 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:37:28 crc kubenswrapper[4718]: I0123 16:37:28.875899 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:37:28 crc kubenswrapper[4718]: I0123 16:37:28.924899 4718 generic.go:334] "Generic (PLEG): container finished" podID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" containerID="72f8578ac8c3194b0a4d4f079f4f7310bbac1f7801a1a5e1b3c146b4845377a2" exitCode=0 Jan 23 16:37:28 crc kubenswrapper[4718]: I0123 16:37:28.924993 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl7vm" event={"ID":"fdefc9e4-c27d-4b53-a75a-5a74124d31f2","Type":"ContainerDied","Data":"72f8578ac8c3194b0a4d4f079f4f7310bbac1f7801a1a5e1b3c146b4845377a2"} Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.539251 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.547834 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.608735 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.691801 4718 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"] Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.692119 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" containerID="cri-o://e6b1e90a9e018b18b4c1a13925be18abc569ba987a0407d362a78180763c5032" gracePeriod=10 Jan 23 16:37:29 crc kubenswrapper[4718]: I0123 16:37:29.942887 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 16:37:30 crc kubenswrapper[4718]: I0123 16:37:30.967250 4718 generic.go:334] "Generic (PLEG): container finished" podID="9914de17-33e2-4fea-a394-da364f4d8b43" containerID="e6b1e90a9e018b18b4c1a13925be18abc569ba987a0407d362a78180763c5032" exitCode=0 Jan 23 16:37:30 crc kubenswrapper[4718]: I0123 16:37:30.968569 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" event={"ID":"9914de17-33e2-4fea-a394-da364f4d8b43","Type":"ContainerDied","Data":"e6b1e90a9e018b18b4c1a13925be18abc569ba987a0407d362a78180763c5032"} Jan 23 16:37:31 crc kubenswrapper[4718]: I0123 16:37:31.632812 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Jan 23 16:37:34 crc kubenswrapper[4718]: E0123 16:37:34.812388 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 23 16:37:34 crc kubenswrapper[4718]: E0123 16:37:34.813334 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksm4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod heat-db-sync-q47zv_openstack(a3d57ff1-707c-4dd4-8922-1d910f52faf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:37:34 crc kubenswrapper[4718]: E0123 16:37:34.814716 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-q47zv" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" Jan 23 16:37:34 crc kubenswrapper[4718]: I0123 16:37:34.937312 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003027 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003110 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkwx\" (UniqueName: \"kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003151 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003200 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003224 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.003265 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle\") pod \"d8e0da39-3959-4a70-9be2-5267fed5165b\" (UID: \"d8e0da39-3959-4a70-9be2-5267fed5165b\") " Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.010503 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.013042 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.021713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vfz5m" event={"ID":"d8e0da39-3959-4a70-9be2-5267fed5165b","Type":"ContainerDied","Data":"e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805"} Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.021763 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vfz5m" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.021781 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a86ed9f7219ff264135e069d5f9ee60d4d641c1517c00fcec7eb6d1a0b5805" Jan 23 16:37:35 crc kubenswrapper[4718]: E0123 16:37:35.023421 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-q47zv" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.026315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx" (OuterVolumeSpecName: "kube-api-access-dxkwx") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "kube-api-access-dxkwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.026684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts" (OuterVolumeSpecName: "scripts") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.050613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.051455 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data" (OuterVolumeSpecName: "config-data") pod "d8e0da39-3959-4a70-9be2-5267fed5165b" (UID: "d8e0da39-3959-4a70-9be2-5267fed5165b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.109151 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.109852 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkwx\" (UniqueName: \"kubernetes.io/projected/d8e0da39-3959-4a70-9be2-5267fed5165b-kube-api-access-dxkwx\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.110010 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.110082 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-config-data\") on node \"crc\" DevicePath \"\"" Jan 
23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.110138 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:35 crc kubenswrapper[4718]: I0123 16:37:35.110304 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e0da39-3959-4a70-9be2-5267fed5165b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:35 crc kubenswrapper[4718]: E0123 16:37:35.485137 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 23 16:37:35 crc kubenswrapper[4718]: E0123 16:37:35.485314 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m2sv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xpbng_openstack(2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:37:35 crc kubenswrapper[4718]: E0123 16:37:35.486602 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xpbng" 
podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.024881 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vfz5m"] Jan 23 16:37:36 crc kubenswrapper[4718]: E0123 16:37:36.032187 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xpbng" podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.037548 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vfz5m"] Jan 23 16:37:36 crc kubenswrapper[4718]: E0123 16:37:36.131034 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119f9654_89f5_40ae_b93a_ddde420e1a51.slice/crio-403933d15cad4783d7cf89930ac90be7ab5a2bb3302192a4b1b313f7243fabe3\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.138872 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9c6mc"] Jan 23 16:37:36 crc kubenswrapper[4718]: E0123 16:37:36.139476 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e0da39-3959-4a70-9be2-5267fed5165b" containerName="keystone-bootstrap" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.139498 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e0da39-3959-4a70-9be2-5267fed5165b" containerName="keystone-bootstrap" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.139737 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e0da39-3959-4a70-9be2-5267fed5165b" containerName="keystone-bootstrap" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.140481 
4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.147341 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4m7cb" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.147516 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.147948 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.148114 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.160326 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9c6mc"] Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.160447 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.246430 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.246503 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.246577 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.246990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzkv\" (UniqueName: \"kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.247495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.247581 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.350964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.351038 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.351095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.351220 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzkv\" (UniqueName: \"kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.351333 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.351365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.358110 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.359837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.362088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.362154 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.362923 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle\") pod \"keystone-bootstrap-9c6mc\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.370248 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzkv\" (UniqueName: \"kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv\") pod \"keystone-bootstrap-9c6mc\" (UID: 
\"27796574-e773-413a-9d32-beb6e99cd093\") " pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.469493 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:36 crc kubenswrapper[4718]: I0123 16:37:36.632912 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Jan 23 16:37:37 crc kubenswrapper[4718]: I0123 16:37:37.158872 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e0da39-3959-4a70-9be2-5267fed5165b" path="/var/lib/kubelet/pods/d8e0da39-3959-4a70-9be2-5267fed5165b/volumes" Jan 23 16:37:39 crc kubenswrapper[4718]: I0123 16:37:39.070532 4718 generic.go:334] "Generic (PLEG): container finished" podID="2e4da531-c3ee-4c93-8de9-60f9c0ce9858" containerID="4e5cc4d921274cf3d9f106afe984955d127ec71f994dcb3d025e89edba83c862" exitCode=0 Jan 23 16:37:39 crc kubenswrapper[4718]: I0123 16:37:39.071046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vbzkx" event={"ID":"2e4da531-c3ee-4c93-8de9-60f9c0ce9858","Type":"ContainerDied","Data":"4e5cc4d921274cf3d9f106afe984955d127ec71f994dcb3d025e89edba83c862"} Jan 23 16:37:41 crc kubenswrapper[4718]: I0123 16:37:41.633312 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Jan 23 16:37:41 crc kubenswrapper[4718]: I0123 16:37:41.633975 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" Jan 23 16:37:42 crc kubenswrapper[4718]: I0123 16:37:42.650037 4718 scope.go:117] 
"RemoveContainer" containerID="689629a9eaa1761f7a59c7852e227d377e90c65aedf7990993b7613abf72a6ba" Jan 23 16:37:45 crc kubenswrapper[4718]: E0123 16:37:45.584364 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 23 16:37:45 crc kubenswrapper[4718]: E0123 16:37:45.585454 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n689hd4h569h555h5dh5b7h57hb7hd4h79h565hd6h558h9hd9h97h5c7hc5hc8h5b5h588h576h5cdh56dhbdh5b5h655h86h5d8h654h5d6h67bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmmgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d99c9a2d-5a68-454a-9516-24e28ef12bb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.631944 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl7vm" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.639694 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.710553 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2r9p\" (UniqueName: \"kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p\") pod \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.710823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pltf\" (UniqueName: \"kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf\") pod \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.710945 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config\") pod \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.711018 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data\") pod \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.711065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data\") pod \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.711099 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle\") pod \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\" (UID: \"2e4da531-c3ee-4c93-8de9-60f9c0ce9858\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.711178 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle\") pod \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\" (UID: \"fdefc9e4-c27d-4b53-a75a-5a74124d31f2\") " Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.741930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf" (OuterVolumeSpecName: "kube-api-access-7pltf") pod "fdefc9e4-c27d-4b53-a75a-5a74124d31f2" (UID: "fdefc9e4-c27d-4b53-a75a-5a74124d31f2"). InnerVolumeSpecName "kube-api-access-7pltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.744851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fdefc9e4-c27d-4b53-a75a-5a74124d31f2" (UID: "fdefc9e4-c27d-4b53-a75a-5a74124d31f2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.762409 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p" (OuterVolumeSpecName: "kube-api-access-z2r9p") pod "2e4da531-c3ee-4c93-8de9-60f9c0ce9858" (UID: "2e4da531-c3ee-4c93-8de9-60f9c0ce9858"). InnerVolumeSpecName "kube-api-access-z2r9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.767335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e4da531-c3ee-4c93-8de9-60f9c0ce9858" (UID: "2e4da531-c3ee-4c93-8de9-60f9c0ce9858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.774962 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdefc9e4-c27d-4b53-a75a-5a74124d31f2" (UID: "fdefc9e4-c27d-4b53-a75a-5a74124d31f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.776318 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config" (OuterVolumeSpecName: "config") pod "2e4da531-c3ee-4c93-8de9-60f9c0ce9858" (UID: "2e4da531-c3ee-4c93-8de9-60f9c0ce9858"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.809853 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data" (OuterVolumeSpecName: "config-data") pod "fdefc9e4-c27d-4b53-a75a-5a74124d31f2" (UID: "fdefc9e4-c27d-4b53-a75a-5a74124d31f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814680 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814738 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814752 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814764 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814773 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814781 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2r9p\" (UniqueName: \"kubernetes.io/projected/2e4da531-c3ee-4c93-8de9-60f9c0ce9858-kube-api-access-z2r9p\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:45 crc kubenswrapper[4718]: I0123 16:37:45.814793 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pltf\" (UniqueName: \"kubernetes.io/projected/fdefc9e4-c27d-4b53-a75a-5a74124d31f2-kube-api-access-7pltf\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 
16:37:46.176753 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vbzkx" event={"ID":"2e4da531-c3ee-4c93-8de9-60f9c0ce9858","Type":"ContainerDied","Data":"5cb4bdc41934b52e7663df2b906ea5607abeaa30bc7504bc3e6956e9dfbbbfd1"} Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.177323 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb4bdc41934b52e7663df2b906ea5607abeaa30bc7504bc3e6956e9dfbbbfd1" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.176855 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vbzkx" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.182442 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl7vm" event={"ID":"fdefc9e4-c27d-4b53-a75a-5a74124d31f2","Type":"ContainerDied","Data":"e82afc8df14bff7a3dd8ba613bd021e04f4614baba5249cc86b63639a0f58611"} Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.182490 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82afc8df14bff7a3dd8ba613bd021e04f4614baba5249cc86b63639a0f58611" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.182535 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl7vm" Jan 23 16:37:46 crc kubenswrapper[4718]: E0123 16:37:46.328800 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdefc9e4_c27d_4b53_a75a_5a74124d31f2.slice/crio-e82afc8df14bff7a3dd8ba613bd021e04f4614baba5249cc86b63639a0f58611\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdefc9e4_c27d_4b53_a75a_5a74124d31f2.slice\": RecentStats: unable to find data in memory cache]" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.928893 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:46 crc kubenswrapper[4718]: E0123 16:37:46.929452 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" containerName="glance-db-sync" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.929468 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" containerName="glance-db-sync" Jan 23 16:37:46 crc kubenswrapper[4718]: E0123 16:37:46.929480 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4da531-c3ee-4c93-8de9-60f9c0ce9858" containerName="neutron-db-sync" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.929486 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4da531-c3ee-4c93-8de9-60f9c0ce9858" containerName="neutron-db-sync" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.929738 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" containerName="glance-db-sync" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.929759 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4da531-c3ee-4c93-8de9-60f9c0ce9858" containerName="neutron-db-sync" Jan 
23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.930983 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:46 crc kubenswrapper[4718]: I0123 16:37:46.965737 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.023464 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79d7cd7f96-8w9s4"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.026188 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.031102 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nrl7q" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.031359 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.031471 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.037285 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.072848 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.072911 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.072953 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.072994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.073035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.073104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmppl\" (UniqueName: \"kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.078054 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-79d7cd7f96-8w9s4"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177401 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmppl\" (UniqueName: \"kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " 
pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nfkt\" (UniqueName: \"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177572 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " 
pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.177665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.179053 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.179286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.179580 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.180336 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 
16:37:47.181813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.204117 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmppl\" (UniqueName: \"kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl\") pod \"dnsmasq-dns-79cd4f6685-drzng\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.273475 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.274001 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.291511 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.291671 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.291801 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nfkt\" (UniqueName: \"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.291892 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.291970 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc 
kubenswrapper[4718]: I0123 16:37:47.312580 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.329796 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.340347 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.345297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.360196 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.362582 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.368480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nfkt\" (UniqueName: \"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt\") pod \"neutron-79d7cd7f96-8w9s4\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.377605 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.390980 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514090 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514167 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514234 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spc8g\" (UniqueName: \"kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " 
pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514340 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.514373 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.614879 4718 scope.go:117] "RemoveContainer" containerID="26883940149ea9d607e83198a2f05bcbc25535297c68d9ed2a74bdaba158baa8" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620183 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620257 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620332 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spc8g\" (UniqueName: \"kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620370 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620440 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.620480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.621397 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.621976 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.622460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.623373 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.624192 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.665058 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spc8g\" (UniqueName: \"kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g\") pod 
\"dnsmasq-dns-6b7b667979-qlsl5\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: E0123 16:37:47.683437 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 23 16:37:47 crc kubenswrapper[4718]: E0123 16:37:47.683676 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPat
h:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxc7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b4crv_openstack(0bd0aeb0-24b5-49c2-96cd-83f9defa05e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:37:47 crc kubenswrapper[4718]: E0123 16:37:47.684818 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b4crv" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.716267 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.755908 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.796688 4718 scope.go:117] "RemoveContainer" containerID="2b5e388021705635647ef7b599adeea4f6f9cd2443cb6248e0585206cc3339ca" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.835035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config\") pod \"9914de17-33e2-4fea-a394-da364f4d8b43\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.835598 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbpf8\" (UniqueName: \"kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8\") pod \"9914de17-33e2-4fea-a394-da364f4d8b43\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.835754 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb\") pod \"9914de17-33e2-4fea-a394-da364f4d8b43\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.835927 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc\") pod \"9914de17-33e2-4fea-a394-da364f4d8b43\" (UID: \"9914de17-33e2-4fea-a394-da364f4d8b43\") " Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.836069 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb\") pod \"9914de17-33e2-4fea-a394-da364f4d8b43\" (UID: 
\"9914de17-33e2-4fea-a394-da364f4d8b43\") " Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.895470 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8" (OuterVolumeSpecName: "kube-api-access-dbpf8") pod "9914de17-33e2-4fea-a394-da364f4d8b43" (UID: "9914de17-33e2-4fea-a394-da364f4d8b43"). InnerVolumeSpecName "kube-api-access-dbpf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.951438 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9914de17-33e2-4fea-a394-da364f4d8b43" (UID: "9914de17-33e2-4fea-a394-da364f4d8b43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.967691 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:47 crc kubenswrapper[4718]: I0123 16:37:47.967741 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbpf8\" (UniqueName: \"kubernetes.io/projected/9914de17-33e2-4fea-a394-da364f4d8b43-kube-api-access-dbpf8\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.029544 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config" (OuterVolumeSpecName: "config") pod "9914de17-33e2-4fea-a394-da364f4d8b43" (UID: "9914de17-33e2-4fea-a394-da364f4d8b43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.041231 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9914de17-33e2-4fea-a394-da364f4d8b43" (UID: "9914de17-33e2-4fea-a394-da364f4d8b43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.067464 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9914de17-33e2-4fea-a394-da364f4d8b43" (UID: "9914de17-33e2-4fea-a394-da364f4d8b43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.070120 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.070158 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.070168 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9914de17-33e2-4fea-a394-da364f4d8b43-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.245327 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" 
event={"ID":"9914de17-33e2-4fea-a394-da364f4d8b43","Type":"ContainerDied","Data":"ecd1792be5cf5217099a4af524b7c7059934ea00d20eeac89f17ad22d4c7ad2d"} Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.245461 4718 scope.go:117] "RemoveContainer" containerID="e6b1e90a9e018b18b4c1a13925be18abc569ba987a0407d362a78180763c5032" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.245608 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.289058 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:37:48 crc kubenswrapper[4718]: E0123 16:37:48.289972 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.289986 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" Jan 23 16:37:48 crc kubenswrapper[4718]: E0123 16:37:48.290010 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="init" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.290017 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="init" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.291510 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.303393 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.317492 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.317907 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.318016 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zxlkc" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.322952 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.326031 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.330134 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.375770 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.456340 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k9cg2"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481427 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481745 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481862 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.481977 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.482077 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.482214 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.482768 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9kb\" (UniqueName: \"kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.482900 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 
16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.483019 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.483157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rbs\" (UniqueName: \"kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.483280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.498290 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.539116 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.548791 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-r87qz"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.558936 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9c6mc"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.587432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.587492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.587581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588288 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588409 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588477 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588728 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9kb\" (UniqueName: \"kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588789 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588837 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588072 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.588945 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8rbs\" (UniqueName: \"kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.589032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.590027 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.590266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603404 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603536 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603893 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6fdb96493887a375a1e9d3a0dda74de9dd624a402ba78fff08bb357f5ac00041/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603823 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.604060 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/41586b6fcb1f91b026169314f421fada9e29acb3bb28133ae98a72e7e358c6a3/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.603619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.604136 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.606845 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.611070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9kb\" (UniqueName: \"kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb\") pod 
\"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.645939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.646185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8rbs\" (UniqueName: \"kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.668268 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.702683 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.716143 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79d7cd7f96-8w9s4"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.719943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: 
\"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: E0123 16:37:48.736399 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b4crv" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.767155 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.773147 4718 scope.go:117] "RemoveContainer" containerID="d9014ad971d16a3996ce4065fd153bd2e7f43dbe6496ac69144209a47dec9109" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.782096 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.782134 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.969770 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:37:48 crc kubenswrapper[4718]: I0123 16:37:48.997264 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.255681 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" path="/var/lib/kubelet/pods/9914de17-33e2-4fea-a394-da364f4d8b43/volumes" Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.290524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" event={"ID":"acc48fe8-92cc-45ee-a4c6-12dc5aade07f","Type":"ContainerStarted","Data":"c25b83f03178e13b273927cfb984cbcf6257fa01aa834b5e0fc1e331822d7936"} Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.300740 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9c6mc" event={"ID":"27796574-e773-413a-9d32-beb6e99cd093","Type":"ContainerStarted","Data":"c42c7af7c93eccb504889a28e338dd54526af7dd6023c4208e05e17877c30840"} Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.316439 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9cg2" event={"ID":"272fb7e3-72ae-4687-9d58-88f35cdb18e2","Type":"ContainerStarted","Data":"9f19080e0a3fdf8c6b7f462148660f5a5e5743fce651d56f0869aa622f86b93b"} Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.333920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerStarted","Data":"28a6a3fabf0cfd8b569a6b473c789ccbc13649c5412be464b5f4578e110d894c"} Jan 23 16:37:49 crc kubenswrapper[4718]: I0123 16:37:49.336248 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerStarted","Data":"5d8c0dbc7aabe3df91ed1665d0c3f52215dad4fbafb47f7da9dbdcbb55252684"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.128532 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.209433 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.359019 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerStarted","Data":"208e15d0e6f071083c884987d678627e340c6690645f9772f8c5aef7aa6fc9d3"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.365057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" event={"ID":"acc48fe8-92cc-45ee-a4c6-12dc5aade07f","Type":"ContainerStarted","Data":"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.365267 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" podUID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" containerName="init" containerID="cri-o://fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220" gracePeriod=10 Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.368933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerStarted","Data":"27c039fa09b62a61bdd190e7c9d091bc2a05815692130e53524f0384458018f8"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.370335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9c6mc" event={"ID":"27796574-e773-413a-9d32-beb6e99cd093","Type":"ContainerStarted","Data":"3f1d84f4b2e206b1284b0c4c3cdf79633cf179efe2ff95cac6f68505a6ff1245"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.373531 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" 
event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerStarted","Data":"928429bb3c74865f885d4de4c4a54e632febd49fe18c86078c4044af55f02198"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.375697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bvdgb" event={"ID":"beac9063-62c9-4cb1-aa45-786d02b1e9db","Type":"ContainerStarted","Data":"d3d02f885ca0ab08a56d12705e8fc0fd29afb1b9820ba4c7b07312b64139eb4b"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.381580 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerStarted","Data":"c102234c4c2e7e3c53948fc339a13c7894c8f75b83c2d8e99de871aa3498f420"} Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.420030 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bvdgb" podStartSLOduration=8.37532978 podStartE2EDuration="34.420012022s" podCreationTimestamp="2026-01-23 16:37:16 +0000 UTC" firstStartedPulling="2026-01-23 16:37:19.489527003 +0000 UTC m=+1240.636768984" lastFinishedPulling="2026-01-23 16:37:45.534209235 +0000 UTC m=+1266.681451226" observedRunningTime="2026-01-23 16:37:50.417645009 +0000 UTC m=+1271.564887010" watchObservedRunningTime="2026-01-23 16:37:50.420012022 +0000 UTC m=+1271.567254013" Jan 23 16:37:50 crc kubenswrapper[4718]: I0123 16:37:50.899911 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.010659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.010719 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.010774 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmppl\" (UniqueName: \"kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.011277 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.011379 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.011417 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0\") pod \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\" (UID: \"acc48fe8-92cc-45ee-a4c6-12dc5aade07f\") " Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.016520 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl" (OuterVolumeSpecName: "kube-api-access-gmppl") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "kube-api-access-gmppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.062007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config" (OuterVolumeSpecName: "config") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.069150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.116126 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmppl\" (UniqueName: \"kubernetes.io/projected/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-kube-api-access-gmppl\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.116161 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.116175 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.121991 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.134182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.176500 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acc48fe8-92cc-45ee-a4c6-12dc5aade07f" (UID: "acc48fe8-92cc-45ee-a4c6-12dc5aade07f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.218227 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.218265 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.218276 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acc48fe8-92cc-45ee-a4c6-12dc5aade07f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.402571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpbng" event={"ID":"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c","Type":"ContainerStarted","Data":"71cf5ab0e0d3b8ffb7793dbe9fbdb8fe1df804b842806e3b6f135145a12b8a29"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.409751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerStarted","Data":"861b8bc546231af488bdfaa720f2977ae98838f84ff3b3e72bc1366bc74f1e5a"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.410542 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.414271 4718 generic.go:334] "Generic (PLEG): container finished" podID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" containerID="fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220" exitCode=0 Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.414351 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" event={"ID":"acc48fe8-92cc-45ee-a4c6-12dc5aade07f","Type":"ContainerDied","Data":"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.414380 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" event={"ID":"acc48fe8-92cc-45ee-a4c6-12dc5aade07f","Type":"ContainerDied","Data":"c25b83f03178e13b273927cfb984cbcf6257fa01aa834b5e0fc1e331822d7936"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.414435 4718 scope.go:117] "RemoveContainer" containerID="fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.414574 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-drzng" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.431981 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerStarted","Data":"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.436680 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xpbng" podStartSLOduration=4.214995841 podStartE2EDuration="35.436667242s" podCreationTimestamp="2026-01-23 16:37:16 +0000 UTC" firstStartedPulling="2026-01-23 16:37:19.528905422 +0000 UTC m=+1240.676147413" lastFinishedPulling="2026-01-23 16:37:50.750576823 +0000 UTC m=+1271.897818814" observedRunningTime="2026-01-23 16:37:51.432189341 +0000 UTC m=+1272.579431332" watchObservedRunningTime="2026-01-23 16:37:51.436667242 +0000 UTC m=+1272.583909233" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.441543 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-q47zv" event={"ID":"a3d57ff1-707c-4dd4-8922-1d910f52faf8","Type":"ContainerStarted","Data":"2db9cf20fa6b20c906e87b700696cf5dc6e88650dd35a9572ed443c563377bed"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.460831 4718 generic.go:334] "Generic (PLEG): container finished" podID="272fb7e3-72ae-4687-9d58-88f35cdb18e2" containerID="809fe8a28e6ad8ef22ff17b1650f5d8ae1d7cdf418842346d4d245a41ba2ffc4" exitCode=0 Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.460960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9cg2" event={"ID":"272fb7e3-72ae-4687-9d58-88f35cdb18e2","Type":"ContainerDied","Data":"809fe8a28e6ad8ef22ff17b1650f5d8ae1d7cdf418842346d4d245a41ba2ffc4"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.492427 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerID="928429bb3c74865f885d4de4c4a54e632febd49fe18c86078c4044af55f02198" exitCode=0 Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.492547 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerDied","Data":"928429bb3c74865f885d4de4c4a54e632febd49fe18c86078c4044af55f02198"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.498915 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79d7cd7f96-8w9s4" podStartSLOduration=5.498888981 podStartE2EDuration="5.498888981s" podCreationTimestamp="2026-01-23 16:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:51.464433175 +0000 UTC m=+1272.611675166" watchObservedRunningTime="2026-01-23 16:37:51.498888981 +0000 UTC m=+1272.646130972" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.532240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerStarted","Data":"1ae7968bf6684799caaebf54ffd31d959eaab938e800a2b656112bff7c44ce7b"} Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.592381 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-q47zv" podStartSLOduration=4.419942682 podStartE2EDuration="35.592354467s" podCreationTimestamp="2026-01-23 16:37:16 +0000 UTC" firstStartedPulling="2026-01-23 16:37:19.400199699 +0000 UTC m=+1240.547441690" lastFinishedPulling="2026-01-23 16:37:50.572611484 +0000 UTC m=+1271.719853475" observedRunningTime="2026-01-23 16:37:51.526216642 +0000 UTC m=+1272.673458633" watchObservedRunningTime="2026-01-23 16:37:51.592354467 +0000 UTC m=+1272.739596458" Jan 23 16:37:51 crc kubenswrapper[4718]: 
I0123 16:37:51.632769 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-r87qz" podUID="9914de17-33e2-4fea-a394-da364f4d8b43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.715723 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.732537 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-drzng"] Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.772891 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9c6mc" podStartSLOduration=15.772863075 podStartE2EDuration="15.772863075s" podCreationTimestamp="2026-01-23 16:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:51.658482192 +0000 UTC m=+1272.805724183" watchObservedRunningTime="2026-01-23 16:37:51.772863075 +0000 UTC m=+1272.920105076" Jan 23 16:37:51 crc kubenswrapper[4718]: I0123 16:37:51.975835 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.082903 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.507903 4718 scope.go:117] "RemoveContainer" containerID="fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220" Jan 23 16:37:52 crc kubenswrapper[4718]: E0123 16:37:52.514560 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220\": container with ID starting with 
fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220 not found: ID does not exist" containerID="fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220" Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.514621 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220"} err="failed to get container status \"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220\": rpc error: code = NotFound desc = could not find container \"fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220\": container with ID starting with fea3ac44c222c310c60adfe64bf2ac0cead5fdf4abf993e91ef656d59f76e220 not found: ID does not exist" Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.569081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerStarted","Data":"018925fd1e0271fed1d0e3af3b08c0304283b716135848e08fdcb4319f80ddd2"} Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.569493 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-log" containerID="cri-o://1ae7968bf6684799caaebf54ffd31d959eaab938e800a2b656112bff7c44ce7b" gracePeriod=30 Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.569778 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-httpd" containerID="cri-o://018925fd1e0271fed1d0e3af3b08c0304283b716135848e08fdcb4319f80ddd2" gracePeriod=30 Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.584475 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-log" containerID="cri-o://6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" gracePeriod=30 Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.584589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerStarted","Data":"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734"} Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.585859 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-httpd" containerID="cri-o://5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" gracePeriod=30 Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.639811 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.639783782 podStartE2EDuration="5.639783782s" podCreationTimestamp="2026-01-23 16:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:52.597994798 +0000 UTC m=+1273.745236789" watchObservedRunningTime="2026-01-23 16:37:52.639783782 +0000 UTC m=+1273.787025773" Jan 23 16:37:52 crc kubenswrapper[4718]: I0123 16:37:52.657715 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.657685257 podStartE2EDuration="5.657685257s" podCreationTimestamp="2026-01-23 16:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:52.626162452 +0000 UTC m=+1273.773404443" watchObservedRunningTime="2026-01-23 16:37:52.657685257 +0000 UTC m=+1273.804927248" Jan 
23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.095856 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.164200 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" path="/var/lib/kubelet/pods/acc48fe8-92cc-45ee-a4c6-12dc5aade07f/volumes" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.205112 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts\") pod \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.205197 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqkm\" (UniqueName: \"kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm\") pod \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\" (UID: \"272fb7e3-72ae-4687-9d58-88f35cdb18e2\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.210494 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "272fb7e3-72ae-4687-9d58-88f35cdb18e2" (UID: "272fb7e3-72ae-4687-9d58-88f35cdb18e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.214464 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm" (OuterVolumeSpecName: "kube-api-access-sbqkm") pod "272fb7e3-72ae-4687-9d58-88f35cdb18e2" (UID: "272fb7e3-72ae-4687-9d58-88f35cdb18e2"). 
InnerVolumeSpecName "kube-api-access-sbqkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.308916 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272fb7e3-72ae-4687-9d58-88f35cdb18e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.308957 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqkm\" (UniqueName: \"kubernetes.io/projected/272fb7e3-72ae-4687-9d58-88f35cdb18e2-kube-api-access-sbqkm\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.406249 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.491561 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"] Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.492179 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" containerName="init" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492192 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" containerName="init" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.492209 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272fb7e3-72ae-4687-9d58-88f35cdb18e2" containerName="mariadb-account-create-update" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492215 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="272fb7e3-72ae-4687-9d58-88f35cdb18e2" containerName="mariadb-account-create-update" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.492242 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" 
containerName="glance-httpd" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492252 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-httpd" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.492299 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-log" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492306 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-log" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492503 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-httpd" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492513 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc48fe8-92cc-45ee-a4c6-12dc5aade07f" containerName="init" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492532 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="272fb7e3-72ae-4687-9d58-88f35cdb18e2" containerName="mariadb-account-create-update" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.492545 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerName="glance-log" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.493756 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.499598 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.500569 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.512759 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.512897 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.513022 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv9kb\" (UniqueName: \"kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.513124 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.513289 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.513395 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.513475 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") " Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.515201 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs" (OuterVolumeSpecName: "logs") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.516294 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.524341 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb" (OuterVolumeSpecName: "kube-api-access-bv9kb") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "kube-api-access-bv9kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.528652 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts" (OuterVolumeSpecName: "scripts") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.521330 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"] Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.522608 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.579462 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.597566 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8 podName:23f6a937-f508-45a9-a8cd-c5e6527f0e1d nodeName:}" failed. No retries permitted until 2026-01-23 16:37:54.097540133 +0000 UTC m=+1275.244782124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.626328 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data" (OuterVolumeSpecName: "config-data") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.634757 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.634992 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.635111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.635260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.635350 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " 
pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.639828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.639971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjk2\" (UniqueName: \"kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.640244 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.640354 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.640412 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.640484 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv9kb\" (UniqueName: \"kubernetes.io/projected/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-kube-api-access-bv9kb\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.640544 4718 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23f6a937-f508-45a9-a8cd-c5e6527f0e1d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.645311 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerStarted","Data":"3f51d37a677af322d8044a539b835c2187199fc463b531c7b5043dbfd8dd759c"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.650862 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659537 4718 generic.go:334] "Generic (PLEG): container finished" podID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerID="018925fd1e0271fed1d0e3af3b08c0304283b716135848e08fdcb4319f80ddd2" exitCode=143 Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659572 4718 generic.go:334] "Generic (PLEG): container finished" podID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerID="1ae7968bf6684799caaebf54ffd31d959eaab938e800a2b656112bff7c44ce7b" exitCode=143 Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659593 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerDied","Data":"018925fd1e0271fed1d0e3af3b08c0304283b716135848e08fdcb4319f80ddd2"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659698 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerDied","Data":"1ae7968bf6684799caaebf54ffd31d959eaab938e800a2b656112bff7c44ce7b"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1d5cdeed-5542-49ea-ae74-1272ec4f60b3","Type":"ContainerDied","Data":"c102234c4c2e7e3c53948fc339a13c7894c8f75b83c2d8e99de871aa3498f420"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.659729 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c102234c4c2e7e3c53948fc339a13c7894c8f75b83c2d8e99de871aa3498f420" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675073 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" podStartSLOduration=6.675056316 podStartE2EDuration="6.675056316s" podCreationTimestamp="2026-01-23 16:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:53.66894637 +0000 UTC m=+1274.816188361" watchObservedRunningTime="2026-01-23 16:37:53.675056316 +0000 UTC m=+1274.822298307" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675361 4718 generic.go:334] "Generic (PLEG): container finished" podID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerID="5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" exitCode=143 Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675393 4718 generic.go:334] "Generic (PLEG): container finished" podID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" containerID="6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" exitCode=143 Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675429 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675441 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerDied","Data":"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerDied","Data":"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"23f6a937-f508-45a9-a8cd-c5e6527f0e1d","Type":"ContainerDied","Data":"27c039fa09b62a61bdd190e7c9d091bc2a05815692130e53524f0384458018f8"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.675599 4718 scope.go:117] "RemoveContainer" containerID="5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.683191 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerStarted","Data":"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.691731 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k9cg2" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.692641 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9cg2" event={"ID":"272fb7e3-72ae-4687-9d58-88f35cdb18e2","Type":"ContainerDied","Data":"9f19080e0a3fdf8c6b7f462148660f5a5e5743fce651d56f0869aa622f86b93b"} Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.692703 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f19080e0a3fdf8c6b7f462148660f5a5e5743fce651d56f0869aa622f86b93b" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746151 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config\") pod 
\"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746424 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746864 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.746921 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjk2\" (UniqueName: \"kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.751721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.755330 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " 
pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.755761 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.757404 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.760759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.763825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.764956 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjk2\" (UniqueName: \"kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2\") pod \"neutron-7cf46bcc5-bh2nc\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") " pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.860909 4718 scope.go:117] "RemoveContainer" 
containerID="6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.895742 4718 scope.go:117] "RemoveContainer" containerID="5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.896898 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734\": container with ID starting with 5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734 not found: ID does not exist" containerID="5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.896960 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734"} err="failed to get container status \"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734\": rpc error: code = NotFound desc = could not find container \"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734\": container with ID starting with 5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734 not found: ID does not exist" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.896994 4718 scope.go:117] "RemoveContainer" containerID="6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" Jan 23 16:37:53 crc kubenswrapper[4718]: E0123 16:37:53.899556 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13\": container with ID starting with 6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13 not found: ID does not exist" containerID="6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" Jan 23 16:37:53 crc 
kubenswrapper[4718]: I0123 16:37:53.899612 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13"} err="failed to get container status \"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13\": rpc error: code = NotFound desc = could not find container \"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13\": container with ID starting with 6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13 not found: ID does not exist" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.899668 4718 scope.go:117] "RemoveContainer" containerID="5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.908678 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cf46bcc5-bh2nc" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.919093 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734"} err="failed to get container status \"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734\": rpc error: code = NotFound desc = could not find container \"5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734\": container with ID starting with 5b59db07c56271931e379276176a242c2c7ec1b18240cb036f9229078eddf734 not found: ID does not exist" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.919157 4718 scope.go:117] "RemoveContainer" containerID="6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.921357 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13"} err="failed to get container status 
\"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13\": rpc error: code = NotFound desc = could not find container \"6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13\": container with ID starting with 6e660c5343d5ee2485c72cbfee7c9261cf363561167b4c7bf183110ff6db4c13 not found: ID does not exist" Jan 23 16:37:53 crc kubenswrapper[4718]: I0123 16:37:53.933939 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055242 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055332 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055489 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055752 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d8rbs\" (UniqueName: \"kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.055818 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.056129 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\" (UID: \"1d5cdeed-5542-49ea-ae74-1272ec4f60b3\") " Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.064028 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs" (OuterVolumeSpecName: "logs") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.065399 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.072934 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs" (OuterVolumeSpecName: "kube-api-access-d8rbs") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "kube-api-access-d8rbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.086917 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts" (OuterVolumeSpecName: "scripts") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.164620 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.191983 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\" (UID: \"23f6a937-f508-45a9-a8cd-c5e6527f0e1d\") "
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.219942 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-logs\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.219988 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.220002 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.220015 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.220024 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8rbs\" (UniqueName: \"kubernetes.io/projected/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-kube-api-access-d8rbs\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.356353 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data" (OuterVolumeSpecName: "config-data") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.377695 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8" (OuterVolumeSpecName: "glance") pod "23f6a937-f508-45a9-a8cd-c5e6527f0e1d" (UID: "23f6a937-f508-45a9-a8cd-c5e6527f0e1d"). InnerVolumeSpecName "pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.395134 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f" (OuterVolumeSpecName: "glance") pod "1d5cdeed-5542-49ea-ae74-1272ec4f60b3" (UID: "1d5cdeed-5542-49ea-ae74-1272ec4f60b3"). InnerVolumeSpecName "pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.433167 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5cdeed-5542-49ea-ae74-1272ec4f60b3-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.433240 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") on node \"crc\" "
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.433267 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") on node \"crc\" "
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.466008 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.466291 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8") on node "crc"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.477816 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.477991 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f") on node "crc"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.534915 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.534960 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") on node \"crc\" DevicePath \"\""
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.599795 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.608162 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.623815 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: E0123 16:37:54.624592 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-httpd"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.624613 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-httpd"
Jan 23 16:37:54 crc kubenswrapper[4718]: E0123 16:37:54.624667 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-log"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.624677 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-log"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.624955 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-httpd"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.624983 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" containerName="glance-log"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.626700 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.629819 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.630143 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.634180 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.712354 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744620 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744765 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744796 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744867 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744927 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.744944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.757678 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.773924 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.786692 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.791574 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.794030 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.795827 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.827180 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847338 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847374 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847392 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.847431 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.850207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.850606 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.855504 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.855814 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.856366 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.856390 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6fdb96493887a375a1e9d3a0dda74de9dd624a402ba78fff08bb357f5ac00041/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.856429 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.862127 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.870601 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.901325 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"]
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.921296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.950230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.950934 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.950986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.951059 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.951081 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.951111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.951204 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.951285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbln\" (UniqueName: \"kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:54 crc kubenswrapper[4718]: I0123 16:37:54.963629 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053456 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbln\" (UniqueName: \"kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053534 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053565 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053600 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053676 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053728 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053749 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.053835 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.054604 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.054938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.061001 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.062683 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.062742 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/41586b6fcb1f91b026169314f421fada9e29acb3bb28133ae98a72e7e358c6a3/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.063441 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.063874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.076789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.082516 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbln\" (UniqueName: \"kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.149007 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.161867 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5cdeed-5542-49ea-ae74-1272ec4f60b3" path="/var/lib/kubelet/pods/1d5cdeed-5542-49ea-ae74-1272ec4f60b3/volumes"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.163196 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f6a937-f508-45a9-a8cd-c5e6527f0e1d" path="/var/lib/kubelet/pods/23f6a937-f508-45a9-a8cd-c5e6527f0e1d/volumes"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.422559 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.694680 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.751094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerStarted","Data":"920a7b4d135e2096ac3e52cd9501b1a325c99bb36c07452d45b94db6e0e34762"}
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.751142 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerStarted","Data":"625e60bdbbdf2daf1ab19a225b42fccb5ba9ac6267a59094c4e7027ce71bece4"}
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.751169 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerStarted","Data":"3063cc15463493693c745a846d696eac94ec513d31a7949f097c5f9ff60e0b9a"}
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.752090 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cf46bcc5-bh2nc"
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.759413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerStarted","Data":"caf35a4b84378fd8a4d6712924276ad7b0d5dfe34b1f9dcb16dd753972dd88b9"}
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.765458 4718 generic.go:334] "Generic (PLEG): container finished" podID="beac9063-62c9-4cb1-aa45-786d02b1e9db" containerID="d3d02f885ca0ab08a56d12705e8fc0fd29afb1b9820ba4c7b07312b64139eb4b" exitCode=0
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.765517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bvdgb" event={"ID":"beac9063-62c9-4cb1-aa45-786d02b1e9db","Type":"ContainerDied","Data":"d3d02f885ca0ab08a56d12705e8fc0fd29afb1b9820ba4c7b07312b64139eb4b"}
Jan 23 16:37:55 crc kubenswrapper[4718]: I0123 16:37:55.788120 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cf46bcc5-bh2nc" podStartSLOduration=2.788097129 podStartE2EDuration="2.788097129s" podCreationTimestamp="2026-01-23 16:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:55.785280922 +0000 UTC m=+1276.932522923" watchObservedRunningTime="2026-01-23 16:37:55.788097129 +0000 UTC m=+1276.935339120"
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.107101 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k9cg2"]
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.132786 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k9cg2"]
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.180958 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:37:56 crc kubenswrapper[4718]: E0123 16:37:56.364235 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27796574_e773_413a_9d32_beb6e99cd093.slice/crio-3f1d84f4b2e206b1284b0c4c3cdf79633cf179efe2ff95cac6f68505a6ff1245.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.783396 4718 generic.go:334] "Generic (PLEG): container finished" podID="27796574-e773-413a-9d32-beb6e99cd093" containerID="3f1d84f4b2e206b1284b0c4c3cdf79633cf179efe2ff95cac6f68505a6ff1245" exitCode=0
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.783819 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9c6mc" event={"ID":"27796574-e773-413a-9d32-beb6e99cd093","Type":"ContainerDied","Data":"3f1d84f4b2e206b1284b0c4c3cdf79633cf179efe2ff95cac6f68505a6ff1245"}
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.792395 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" containerID="71cf5ab0e0d3b8ffb7793dbe9fbdb8fe1df804b842806e3b6f135145a12b8a29" exitCode=0
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.792508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpbng" event={"ID":"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c","Type":"ContainerDied","Data":"71cf5ab0e0d3b8ffb7793dbe9fbdb8fe1df804b842806e3b6f135145a12b8a29"}
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.796784 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerStarted","Data":"c7d9778bd38f9b1dd30e6e5fc91959033ff5becb7bacc4bbe4a8f9af2ae57aa7"}
Jan 23 16:37:56 crc kubenswrapper[4718]: I0123 16:37:56.799347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerStarted","Data":"342a68aa103d9a8034134c2134bb38b9a8df9588475b30d339f06607c8b8accb"}
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.216077 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272fb7e3-72ae-4687-9d58-88f35cdb18e2" path="/var/lib/kubelet/pods/272fb7e3-72ae-4687-9d58-88f35cdb18e2/volumes"
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.271019 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bvdgb"
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384173 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts\") pod \"beac9063-62c9-4cb1-aa45-786d02b1e9db\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") "
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384307 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs\") pod \"beac9063-62c9-4cb1-aa45-786d02b1e9db\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") "
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384511 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgtxs\" (UniqueName: \"kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs\") pod \"beac9063-62c9-4cb1-aa45-786d02b1e9db\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") "
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data\") pod \"beac9063-62c9-4cb1-aa45-786d02b1e9db\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") "
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle\") pod \"beac9063-62c9-4cb1-aa45-786d02b1e9db\" (UID: \"beac9063-62c9-4cb1-aa45-786d02b1e9db\") "
Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.384978 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs" (OuterVolumeSpecName: "logs") pod "beac9063-62c9-4cb1-aa45-786d02b1e9db" (UID: "beac9063-62c9-4cb1-aa45-786d02b1e9db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.385557 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beac9063-62c9-4cb1-aa45-786d02b1e9db-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.392572 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs" (OuterVolumeSpecName: "kube-api-access-qgtxs") pod "beac9063-62c9-4cb1-aa45-786d02b1e9db" (UID: "beac9063-62c9-4cb1-aa45-786d02b1e9db"). InnerVolumeSpecName "kube-api-access-qgtxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.394728 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts" (OuterVolumeSpecName: "scripts") pod "beac9063-62c9-4cb1-aa45-786d02b1e9db" (UID: "beac9063-62c9-4cb1-aa45-786d02b1e9db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.438402 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beac9063-62c9-4cb1-aa45-786d02b1e9db" (UID: "beac9063-62c9-4cb1-aa45-786d02b1e9db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.450416 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data" (OuterVolumeSpecName: "config-data") pod "beac9063-62c9-4cb1-aa45-786d02b1e9db" (UID: "beac9063-62c9-4cb1-aa45-786d02b1e9db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.487648 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgtxs\" (UniqueName: \"kubernetes.io/projected/beac9063-62c9-4cb1-aa45-786d02b1e9db-kube-api-access-qgtxs\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.488201 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.488237 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.488246 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beac9063-62c9-4cb1-aa45-786d02b1e9db-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.494695 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zrnrc"] Jan 23 16:37:57 crc kubenswrapper[4718]: E0123 16:37:57.495331 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beac9063-62c9-4cb1-aa45-786d02b1e9db" containerName="placement-db-sync" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.495348 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="beac9063-62c9-4cb1-aa45-786d02b1e9db" containerName="placement-db-sync" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.496737 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="beac9063-62c9-4cb1-aa45-786d02b1e9db" containerName="placement-db-sync" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.497838 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.503289 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.521064 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zrnrc"] Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.590967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2sr\" (UniqueName: \"kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr\") pod \"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.591099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts\") pod \"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.693321 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts\") pod 
\"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.693511 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz2sr\" (UniqueName: \"kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr\") pod \"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.694153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts\") pod \"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.741622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2sr\" (UniqueName: \"kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr\") pod \"root-account-create-update-zrnrc\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.826876 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zrnrc" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.839211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerStarted","Data":"ab09c8bb2b1ab814b21185f83d61eb76e4b8bd0e37fb09f239e6729b3f99f6df"} Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.864794 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerStarted","Data":"c835b0e0608bfeabe82e8bde61b1e8328bdbbff6fb51fd300ae25efdc40fdbb5"} Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.875343 4718 generic.go:334] "Generic (PLEG): container finished" podID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" containerID="2db9cf20fa6b20c906e87b700696cf5dc6e88650dd35a9572ed443c563377bed" exitCode=0 Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.875495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-q47zv" event={"ID":"a3d57ff1-707c-4dd4-8922-1d910f52faf8","Type":"ContainerDied","Data":"2db9cf20fa6b20c906e87b700696cf5dc6e88650dd35a9572ed443c563377bed"} Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.883812 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bvdgb" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.883807 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bvdgb" event={"ID":"beac9063-62c9-4cb1-aa45-786d02b1e9db","Type":"ContainerDied","Data":"0926dd00466fa192a04c8f3825f57e488044edbadfa6a8627109d23fcd9bac85"} Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.883861 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0926dd00466fa192a04c8f3825f57e488044edbadfa6a8627109d23fcd9bac85" Jan 23 16:37:57 crc kubenswrapper[4718]: I0123 16:37:57.887019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.886992677 podStartE2EDuration="3.886992677s" podCreationTimestamp="2026-01-23 16:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:57.874199379 +0000 UTC m=+1279.021441370" watchObservedRunningTime="2026-01-23 16:37:57.886992677 +0000 UTC m=+1279.034234668" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.044585 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6dffc5fb8-5997w"] Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.047167 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.052317 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.052749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.053405 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.053536 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.053685 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8sdrh" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.081174 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dffc5fb8-5997w"] Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.211087 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-scripts\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.211728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4878ef2b-0a67-424e-95b7-53803746d9f3-logs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.211855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-public-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.212112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-internal-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.212168 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-config-data\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.212213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-combined-ca-bundle\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.212439 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqrd\" (UniqueName: \"kubernetes.io/projected/4878ef2b-0a67-424e-95b7-53803746d9f3-kube-api-access-zgqrd\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.315876 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-scripts\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.315971 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4878ef2b-0a67-424e-95b7-53803746d9f3-logs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.316050 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-public-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.316106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-internal-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.316126 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-config-data\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.316150 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-combined-ca-bundle\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.316201 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqrd\" (UniqueName: \"kubernetes.io/projected/4878ef2b-0a67-424e-95b7-53803746d9f3-kube-api-access-zgqrd\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.327203 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4878ef2b-0a67-424e-95b7-53803746d9f3-logs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.341618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-combined-ca-bundle\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.341733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-internal-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.342743 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-config-data\") pod \"placement-6dffc5fb8-5997w\" (UID: 
\"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.346987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-scripts\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.361334 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqrd\" (UniqueName: \"kubernetes.io/projected/4878ef2b-0a67-424e-95b7-53803746d9f3-kube-api-access-zgqrd\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.361822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4878ef2b-0a67-424e-95b7-53803746d9f3-public-tls-certs\") pod \"placement-6dffc5fb8-5997w\" (UID: \"4878ef2b-0a67-424e-95b7-53803746d9f3\") " pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.424006 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.492088 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.630405 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle\") pod \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.631156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m2sv\" (UniqueName: \"kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv\") pod \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.631384 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data\") pod \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\" (UID: \"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.648730 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" (UID: "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.653535 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv" (OuterVolumeSpecName: "kube-api-access-9m2sv") pod "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" (UID: "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c"). 
InnerVolumeSpecName "kube-api-access-9m2sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.697523 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" (UID: "2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.732458 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zrnrc"] Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.740654 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.740700 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m2sv\" (UniqueName: \"kubernetes.io/projected/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-kube-api-access-9m2sv\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.740716 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.815489 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.875146 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.875221 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.875290 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.876470 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.876560 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094" gracePeriod=600 Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.904551 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerStarted","Data":"aef28a08b2106dce09891bf581414767268366f60e5b60a0a10499e6de041f0c"} Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.912158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrnrc" event={"ID":"a730641b-f7e3-4f6f-8aae-8acf92f37ca1","Type":"ContainerStarted","Data":"d2161b31ad280bfc1a7d3c524b64219d5699beb121cc4609c32939771c85ca6a"} Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.916417 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9c6mc" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.916893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9c6mc" event={"ID":"27796574-e773-413a-9d32-beb6e99cd093","Type":"ContainerDied","Data":"c42c7af7c93eccb504889a28e338dd54526af7dd6023c4208e05e17877c30840"} Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.917081 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42c7af7c93eccb504889a28e338dd54526af7dd6023c4208e05e17877c30840" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.929525 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xpbng" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.929993 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpbng" event={"ID":"2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c","Type":"ContainerDied","Data":"1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95"} Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.930044 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2cec2b483ec096ba8d6c2f9987dfce771acc52a1d52a6fad0cdb0661ae2a95" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.933722 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.933709462 podStartE2EDuration="4.933709462s" podCreationTimestamp="2026-01-23 16:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:37:58.926186028 +0000 UTC m=+1280.073428019" watchObservedRunningTime="2026-01-23 16:37:58.933709462 +0000 UTC m=+1280.080951453" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958271 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958351 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958459 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958480 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzkv\" (UniqueName: \"kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958595 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.958673 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys\") pod \"27796574-e773-413a-9d32-beb6e99cd093\" (UID: \"27796574-e773-413a-9d32-beb6e99cd093\") " Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.966790 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.967237 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv" (OuterVolumeSpecName: "kube-api-access-rzzkv") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "kube-api-access-rzzkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.968039 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:58 crc kubenswrapper[4718]: I0123 16:37:58.972822 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts" (OuterVolumeSpecName: "scripts") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.027148 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data" (OuterVolumeSpecName: "config-data") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.031529 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27796574-e773-413a-9d32-beb6e99cd093" (UID: "27796574-e773-413a-9d32-beb6e99cd093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061880 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061931 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061944 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061954 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzkv\" (UniqueName: \"kubernetes.io/projected/27796574-e773-413a-9d32-beb6e99cd093-kube-api-access-rzzkv\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061964 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.061975 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/27796574-e773-413a-9d32-beb6e99cd093-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.067925 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dffc5fb8-5997w"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.229532 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"] Jan 23 16:37:59 crc kubenswrapper[4718]: E0123 16:37:59.238917 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" containerName="barbican-db-sync" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.238948 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" containerName="barbican-db-sync" Jan 23 16:37:59 crc kubenswrapper[4718]: E0123 16:37:59.238986 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27796574-e773-413a-9d32-beb6e99cd093" containerName="keystone-bootstrap" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.238996 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="27796574-e773-413a-9d32-beb6e99cd093" containerName="keystone-bootstrap" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.239891 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" containerName="barbican-db-sync" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.239941 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="27796574-e773-413a-9d32-beb6e99cd093" containerName="keystone-bootstrap" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.245532 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.258950 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.260088 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jds6x" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.260598 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.282450 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.285258 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.285483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.285525 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 
23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.285588 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qct\" (UniqueName: \"kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.290198 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.306149 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.327912 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.351718 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.354031 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.367306 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.367791 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="dnsmasq-dns" containerID="cri-o://3f51d37a677af322d8044a539b835c2187199fc463b531c7b5043dbfd8dd759c" gracePeriod=10 Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.368807 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.437930 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.437993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: 
I0123 16:37:59.438022 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qct\" (UniqueName: \"kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.445235 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.447459 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458667 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4jg\" (UniqueName: \"kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458707 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458755 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.458799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.460303 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: 
\"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.461920 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.479513 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qct\" (UniqueName: \"kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.479524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.481360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom\") pod \"barbican-worker-78f48f99c9-tjmw2\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.553499 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.567963 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568009 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568027 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pvl\" (UniqueName: \"kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568064 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv4jg\" (UniqueName: \"kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568082 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568110 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568185 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568259 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568297 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.568346 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.582924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.594936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.597650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.603363 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 
crc kubenswrapper[4718]: I0123 16:37:59.604464 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.672796 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.673017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.673137 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pvl\" (UniqueName: \"kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.673345 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.673522 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc\") pod 
\"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.673651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.674878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.675606 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.676193 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.677362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.678497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.686126 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv4jg\" (UniqueName: \"kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg\") pod \"barbican-keystone-listener-859b68d8fd-fn26w\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.698796 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pvl\" (UniqueName: \"kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl\") pod \"dnsmasq-dns-848cf88cfc-ld95m\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.779168 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.795336 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.798066 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.817421 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.829177 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.838972 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-q47zv" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.880227 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.880502 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5j9\" (UniqueName: \"kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.880604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.893858 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.894137 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:37:59 crc kubenswrapper[4718]: I0123 16:37:59.927088 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.001614 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksm4m\" (UniqueName: \"kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m\") pod \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.001950 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle\") pod \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.002108 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data\") pod \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\" (UID: \"a3d57ff1-707c-4dd4-8922-1d910f52faf8\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.006182 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.006285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.006391 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.006473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.006517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5j9\" (UniqueName: \"kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.010541 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.014420 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dffc5fb8-5997w" event={"ID":"4878ef2b-0a67-424e-95b7-53803746d9f3","Type":"ContainerStarted","Data":"668d284c852d2f1f4a136d5eb20ffafc110b22a42fa806cddcbaa6e621c87c9d"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.017006 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.023602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.024973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.055878 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m" (OuterVolumeSpecName: 
"kube-api-access-ksm4m") pod "a3d57ff1-707c-4dd4-8922-1d910f52faf8" (UID: "a3d57ff1-707c-4dd4-8922-1d910f52faf8"). InnerVolumeSpecName "kube-api-access-ksm4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.062246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5j9\" (UniqueName: \"kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9\") pod \"barbican-api-5fd84bc878-plqxp\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.068148 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094" exitCode=0 Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.068251 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.068298 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.068323 4718 scope.go:117] "RemoveContainer" containerID="bd99bd4b2d73295643906a9aa8c3e87cbbb0c2a9c5d2e4b829796f2135ed44c3" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.105068 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f6b6f5fd7-qpbqz"] Jan 23 16:38:00 crc kubenswrapper[4718]: E0123 16:38:00.106459 4718 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" containerName="heat-db-sync" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.106989 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" containerName="heat-db-sync" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.107225 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" containerName="heat-db-sync" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.119235 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.120849 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksm4m\" (UniqueName: \"kubernetes.io/projected/a3d57ff1-707c-4dd4-8922-1d910f52faf8-kube-api-access-ksm4m\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.139283 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4m7cb" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.139535 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.141932 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.142687 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.142858 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.142958 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.156325 
4718 generic.go:334] "Generic (PLEG): container finished" podID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerID="3f51d37a677af322d8044a539b835c2187199fc463b531c7b5043dbfd8dd759c" exitCode=0 Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.156443 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerDied","Data":"3f51d37a677af322d8044a539b835c2187199fc463b531c7b5043dbfd8dd759c"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.161052 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.178829 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-q47zv" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.194426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-q47zv" event={"ID":"a3d57ff1-707c-4dd4-8922-1d910f52faf8","Type":"ContainerDied","Data":"466ed69c7ce827179e9d767b0bf0d06fbd915ab6a696f9fa59bf6c116da3cd45"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.194523 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466ed69c7ce827179e9d767b0bf0d06fbd915ab6a696f9fa59bf6c116da3cd45" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.194545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrnrc" event={"ID":"a730641b-f7e3-4f6f-8aae-8acf92f37ca1","Type":"ContainerStarted","Data":"d9fb76de9125c923237386a4a7b56a2ce2a6ab5db16b13e71f75d42c05e07922"} Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.194562 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f6b6f5fd7-qpbqz"] Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.226698 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-credential-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.226897 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-scripts\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227029 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-public-tls-certs\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-config-data\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227149 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-combined-ca-bundle\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227171 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk8m4\" (UniqueName: \"kubernetes.io/projected/e836bdf5-8379-4f60-8dbe-7be5381ed922-kube-api-access-vk8m4\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227201 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-internal-tls-certs\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.227235 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-fernet-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.229425 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zrnrc" podStartSLOduration=3.229400653 podStartE2EDuration="3.229400653s" podCreationTimestamp="2026-01-23 16:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:00.216167154 +0000 UTC m=+1281.363409145" watchObservedRunningTime="2026-01-23 16:38:00.229400653 +0000 UTC m=+1281.376642644" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.236849 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d57ff1-707c-4dd4-8922-1d910f52faf8" 
(UID: "a3d57ff1-707c-4dd4-8922-1d910f52faf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.304770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data" (OuterVolumeSpecName: "config-data") pod "a3d57ff1-707c-4dd4-8922-1d910f52faf8" (UID: "a3d57ff1-707c-4dd4-8922-1d910f52faf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.329992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-config-data\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330073 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-combined-ca-bundle\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk8m4\" (UniqueName: \"kubernetes.io/projected/e836bdf5-8379-4f60-8dbe-7be5381ed922-kube-api-access-vk8m4\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330156 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-internal-tls-certs\") pod 
\"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-fernet-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-credential-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-scripts\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-public-tls-certs\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330619 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.330669 4718 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d57ff1-707c-4dd4-8922-1d910f52faf8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.340299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-config-data\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.340513 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-scripts\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.341361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-credential-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.353385 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-combined-ca-bundle\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.362316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-public-tls-certs\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " 
pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.362858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-internal-tls-certs\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.371331 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e836bdf5-8379-4f60-8dbe-7be5381ed922-fernet-keys\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.414523 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk8m4\" (UniqueName: \"kubernetes.io/projected/e836bdf5-8379-4f60-8dbe-7be5381ed922-kube-api-access-vk8m4\") pod \"keystone-f6b6f5fd7-qpbqz\" (UID: \"e836bdf5-8379-4f60-8dbe-7be5381ed922\") " pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.480828 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.780750 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.885171 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-655bd97bbb-6cj47"] Jan 23 16:38:00 crc kubenswrapper[4718]: E0123 16:38:00.886390 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="init" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.886414 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="init" Jan 23 16:38:00 crc kubenswrapper[4718]: E0123 16:38:00.886440 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="dnsmasq-dns" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.886447 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="dnsmasq-dns" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.886736 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" containerName="dnsmasq-dns" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.888139 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.905422 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8474589d6c-tbnqc"] Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907033 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907346 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907655 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spc8g\" (UniqueName: \"kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: 
\"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.907771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb\") pod \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\" (UID: \"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2\") " Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.916468 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:00 crc kubenswrapper[4718]: I0123 16:38:00.982413 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-655bd97bbb-6cj47"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:00.999944 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g" (OuterVolumeSpecName: "kube-api-access-spc8g") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "kube-api-access-spc8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019499 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12450ab0-8804-4354-83ff-47ca9b58bcec-logs\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019606 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data-custom\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019666 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldw9f\" (UniqueName: \"kubernetes.io/projected/12450ab0-8804-4354-83ff-47ca9b58bcec-kube-api-access-ldw9f\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019735 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data-custom\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f94\" (UniqueName: 
\"kubernetes.io/projected/b146c37c-0473-4db8-a743-72a7576edf59-kube-api-access-w9f94\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.019962 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-combined-ca-bundle\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.020026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.020074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.020092 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-combined-ca-bundle\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 
16:38:01.020271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b146c37c-0473-4db8-a743-72a7576edf59-logs\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.020346 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spc8g\" (UniqueName: \"kubernetes.io/projected/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-kube-api-access-spc8g\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.021898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.043551 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.051191 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8474589d6c-tbnqc"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.063394 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.066079 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.080650 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.102435 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.105216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config" (OuterVolumeSpecName: "config") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.117730 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b146c37c-0473-4db8-a743-72a7576edf59-logs\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124572 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12450ab0-8804-4354-83ff-47ca9b58bcec-logs\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124625 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data-custom\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124667 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldw9f\" (UniqueName: \"kubernetes.io/projected/12450ab0-8804-4354-83ff-47ca9b58bcec-kube-api-access-ldw9f\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: 
\"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124701 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data-custom\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124868 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9f94\" (UniqueName: \"kubernetes.io/projected/b146c37c-0473-4db8-a743-72a7576edf59-kube-api-access-w9f94\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124915 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-combined-ca-bundle\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124940 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle\") 
pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124970 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.124996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125014 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-combined-ca-bundle\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125054 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwszt\" (UniqueName: \"kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125154 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125170 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125181 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125195 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.125585 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b146c37c-0473-4db8-a743-72a7576edf59-logs\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.126030 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12450ab0-8804-4354-83ff-47ca9b58bcec-logs\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " 
pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.130131 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data-custom\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.134652 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-combined-ca-bundle\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.135699 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.135788 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" (UID: "318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.152475 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-combined-ca-bundle\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.154352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.154977 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12450ab0-8804-4354-83ff-47ca9b58bcec-config-data-custom\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.156780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldw9f\" (UniqueName: \"kubernetes.io/projected/12450ab0-8804-4354-83ff-47ca9b58bcec-kube-api-access-ldw9f\") pod \"barbican-worker-8474589d6c-tbnqc\" (UID: \"12450ab0-8804-4354-83ff-47ca9b58bcec\") " pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.159500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b146c37c-0473-4db8-a743-72a7576edf59-config-data\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " 
pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.163228 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9f94\" (UniqueName: \"kubernetes.io/projected/b146c37c-0473-4db8-a743-72a7576edf59-kube-api-access-w9f94\") pod \"barbican-keystone-listener-655bd97bbb-6cj47\" (UID: \"b146c37c-0473-4db8-a743-72a7576edf59\") " pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.164158 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8474589d6c-tbnqc" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227086 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227196 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwszt\" (UniqueName: \"kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227218 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227309 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227389 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrnrc" event={"ID":"a730641b-f7e3-4f6f-8aae-8acf92f37ca1","Type":"ContainerDied","Data":"d9fb76de9125c923237386a4a7b56a2ce2a6ab5db16b13e71f75d42c05e07922"} Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227417 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.227812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.226421 4718 generic.go:334] "Generic (PLEG): container finished" podID="a730641b-f7e3-4f6f-8aae-8acf92f37ca1" containerID="d9fb76de9125c923237386a4a7b56a2ce2a6ab5db16b13e71f75d42c05e07922" exitCode=0 Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.232859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.233619 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.235830 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.243380 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.244270 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dffc5fb8-5997w" event={"ID":"4878ef2b-0a67-424e-95b7-53803746d9f3","Type":"ContainerStarted","Data":"1a120df5f2792b9d5077633edab64b898f8bc45743d9292ce04784c62352a8a6"} Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.245183 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.255419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwszt\" (UniqueName: \"kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt\") pod \"barbican-api-6677fdc7d-j4bjx\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.273216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" event={"ID":"318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2","Type":"ContainerDied","Data":"28a6a3fabf0cfd8b569a6b473c789ccbc13649c5412be464b5f4578e110d894c"} Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.273294 4718 scope.go:117] "RemoveContainer" containerID="3f51d37a677af322d8044a539b835c2187199fc463b531c7b5043dbfd8dd759c" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.273244 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-qlsl5" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.309411 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.318730 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-qlsl5"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.469701 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"] Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.504686 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:01 crc kubenswrapper[4718]: I0123 16:38:01.518332 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.167370 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2" path="/var/lib/kubelet/pods/318cbfc2-1c16-40c7-b82a-9d81ac0b2ff2/volumes" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.175030 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.221715 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b4d87654d-p9q2p"] Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.223851 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.229611 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.232893 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.262658 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b4d87654d-p9q2p"] Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329611 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mj4m\" (UniqueName: \"kubernetes.io/projected/d3d50a24-2b4e-43eb-ac1a-2807554f0989-kube-api-access-2mj4m\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329707 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-public-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329821 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329861 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-combined-ca-bundle\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data-custom\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.329941 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d50a24-2b4e-43eb-ac1a-2807554f0989-logs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc 
kubenswrapper[4718]: I0123 16:38:03.329969 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-internal-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.434137 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.434226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-combined-ca-bundle\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.434344 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data-custom\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.434597 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d50a24-2b4e-43eb-ac1a-2807554f0989-logs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 
16:38:03.434667 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-internal-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.435872 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d50a24-2b4e-43eb-ac1a-2807554f0989-logs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.436207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mj4m\" (UniqueName: \"kubernetes.io/projected/d3d50a24-2b4e-43eb-ac1a-2807554f0989-kube-api-access-2mj4m\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.436294 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-public-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.444607 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-combined-ca-bundle\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.448864 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data-custom\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.459404 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-public-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.459871 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-internal-tls-certs\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.460296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d50a24-2b4e-43eb-ac1a-2807554f0989-config-data\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.466669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mj4m\" (UniqueName: \"kubernetes.io/projected/d3d50a24-2b4e-43eb-ac1a-2807554f0989-kube-api-access-2mj4m\") pod \"barbican-api-5b4d87654d-p9q2p\" (UID: \"d3d50a24-2b4e-43eb-ac1a-2807554f0989\") " pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:03 crc kubenswrapper[4718]: I0123 16:38:03.552314 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:04 crc kubenswrapper[4718]: W0123 16:38:04.920578 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7faa3883_3a27_4ef9_9da6_476f43ad53e0.slice/crio-e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860 WatchSource:0}: Error finding container e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860: Status 404 returned error can't find the container with id e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860 Jan 23 16:38:04 crc kubenswrapper[4718]: W0123 16:38:04.941946 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab7c049_e49d_442e_b9e8_86cf1ac46a59.slice/crio-3dbc9942b4e9ef43a8e28ffdb1b3337107d0d213f9aeacf64207153d6d4b8e3f WatchSource:0}: Error finding container 3dbc9942b4e9ef43a8e28ffdb1b3337107d0d213f9aeacf64207153d6d4b8e3f: Status 404 returned error can't find the container with id 3dbc9942b4e9ef43a8e28ffdb1b3337107d0d213f9aeacf64207153d6d4b8e3f Jan 23 16:38:04 crc kubenswrapper[4718]: W0123 16:38:04.947957 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e2741c_c631_42d1_bc2a_71292bbcfe61.slice/crio-888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5 WatchSource:0}: Error finding container 888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5: Status 404 returned error can't find the container with id 888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5 Jan 23 16:38:04 crc kubenswrapper[4718]: I0123 16:38:04.963899 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:04 crc kubenswrapper[4718]: I0123 16:38:04.963965 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.027495 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.029304 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.123053 4718 scope.go:117] "RemoveContainer" containerID="928429bb3c74865f885d4de4c4a54e632febd49fe18c86078c4044af55f02198" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.131222 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zrnrc" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.192009 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts\") pod \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.192669 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a730641b-f7e3-4f6f-8aae-8acf92f37ca1" (UID: "a730641b-f7e3-4f6f-8aae-8acf92f37ca1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.193059 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz2sr\" (UniqueName: \"kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr\") pod \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\" (UID: \"a730641b-f7e3-4f6f-8aae-8acf92f37ca1\") " Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.194228 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.207395 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr" (OuterVolumeSpecName: "kube-api-access-hz2sr") pod "a730641b-f7e3-4f6f-8aae-8acf92f37ca1" (UID: "a730641b-f7e3-4f6f-8aae-8acf92f37ca1"). InnerVolumeSpecName "kube-api-access-hz2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.296312 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz2sr\" (UniqueName: \"kubernetes.io/projected/a730641b-f7e3-4f6f-8aae-8acf92f37ca1-kube-api-access-hz2sr\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.372577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerStarted","Data":"e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860"} Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.374849 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zrnrc" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.376733 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrnrc" event={"ID":"a730641b-f7e3-4f6f-8aae-8acf92f37ca1","Type":"ContainerDied","Data":"d2161b31ad280bfc1a7d3c524b64219d5699beb121cc4609c32939771c85ca6a"} Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.376770 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2161b31ad280bfc1a7d3c524b64219d5699beb121cc4609c32939771c85ca6a" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.392931 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerStarted","Data":"00bddcb1497ed28f8079a7e40adea6c3fc53b6689f54eb4d2255d5599c48f165"} Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.398572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" event={"ID":"50e2741c-c631-42d1-bc2a-71292bbcfe61","Type":"ContainerStarted","Data":"888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5"} Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.416677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerStarted","Data":"3dbc9942b4e9ef43a8e28ffdb1b3337107d0d213f9aeacf64207153d6d4b8e3f"} Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.416957 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.416994 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.427012 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.427064 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.526470 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.527030 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 16:38:05 crc kubenswrapper[4718]: I0123 16:38:05.541512 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f6b6f5fd7-qpbqz"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.030161 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-655bd97bbb-6cj47"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.243989 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zrnrc"] Jan 23 16:38:06 crc kubenswrapper[4718]: W0123 16:38:06.261932 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12450ab0_8804_4354_83ff_47ca9b58bcec.slice/crio-28f10ff9f05202db79b3356bf67b57bb3a75e9bca9deb0f3fd4ede3a143033b1 WatchSource:0}: Error finding container 28f10ff9f05202db79b3356bf67b57bb3a75e9bca9deb0f3fd4ede3a143033b1: Status 404 returned error can't find the container with id 28f10ff9f05202db79b3356bf67b57bb3a75e9bca9deb0f3fd4ede3a143033b1 Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.267884 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zrnrc"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.287466 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-8474589d6c-tbnqc"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.306137 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.334512 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b4d87654d-p9q2p"] Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.436101 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerStarted","Data":"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.439170 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dffc5fb8-5997w" event={"ID":"4878ef2b-0a67-424e-95b7-53803746d9f3","Type":"ContainerStarted","Data":"99d81b1cc6cfcd3d5ae4543e949205ccd5c1182c1885148d4ed26cc05fbe0bc9"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.439254 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.439592 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.444010 4718 generic.go:334] "Generic (PLEG): container finished" podID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerID="91e20785aa4478e658e6b3ce9c61dd60a5405b430a5ab681c0dacea39dbc96c2" exitCode=0 Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.444123 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" event={"ID":"50e2741c-c631-42d1-bc2a-71292bbcfe61","Type":"ContainerDied","Data":"91e20785aa4478e658e6b3ce9c61dd60a5405b430a5ab681c0dacea39dbc96c2"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.447656 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" event={"ID":"b146c37c-0473-4db8-a743-72a7576edf59","Type":"ContainerStarted","Data":"3a9b273820ae8e2dd9e9b6a094608c6ba482af985db89fe04623288caa66e598"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.457058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerStarted","Data":"10aefa05da49a8857aa6de53c128ded1db1841f7f4c88c8a053f7301afe44341"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.464830 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8474589d6c-tbnqc" event={"ID":"12450ab0-8804-4354-83ff-47ca9b58bcec","Type":"ContainerStarted","Data":"28f10ff9f05202db79b3356bf67b57bb3a75e9bca9deb0f3fd4ede3a143033b1"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.478326 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerStarted","Data":"43624438068a895ad3dc4f78c6a25eef96f7e3669965115ca7455c2a4ea87632"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.483528 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f6b6f5fd7-qpbqz" event={"ID":"e836bdf5-8379-4f60-8dbe-7be5381ed922","Type":"ContainerStarted","Data":"9d598a4882ffb18818e74f1c072e6424b7d4decbf2394020edb54bed2d964daa"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.488826 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4d87654d-p9q2p" event={"ID":"d3d50a24-2b4e-43eb-ac1a-2807554f0989","Type":"ContainerStarted","Data":"9eecd0a6c4b17b399f1a1349f615930f16341677d2e166e0c8acf7379cb5f9ee"} Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.489711 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" 
Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.490036 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 16:38:06 crc kubenswrapper[4718]: I0123 16:38:06.490249 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6dffc5fb8-5997w" podStartSLOduration=9.490216465 podStartE2EDuration="9.490216465s" podCreationTimestamp="2026-01-23 16:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:06.472959807 +0000 UTC m=+1287.620201798" watchObservedRunningTime="2026-01-23 16:38:06.490216465 +0000 UTC m=+1287.637458466" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.174648 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a730641b-f7e3-4f6f-8aae-8acf92f37ca1" path="/var/lib/kubelet/pods/a730641b-f7e3-4f6f-8aae-8acf92f37ca1/volumes" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.526740 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4d87654d-p9q2p" event={"ID":"d3d50a24-2b4e-43eb-ac1a-2807554f0989","Type":"ContainerStarted","Data":"a14d312f5f6d06ddd6a208a9dca852c7811013c21c27585f0be4beda33d92f03"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.527248 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4d87654d-p9q2p" event={"ID":"d3d50a24-2b4e-43eb-ac1a-2807554f0989","Type":"ContainerStarted","Data":"3f794f58cac082a20e5ef8a1bb9fbd1883af35ae99051a241fbf9f86d144d049"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.527305 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.527336 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:07 
crc kubenswrapper[4718]: I0123 16:38:07.544280 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerStarted","Data":"541f6992eb1b644c46e48bc087802cdbd1a354efc0082d32ff6319e93e2f6e36"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.544348 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerStarted","Data":"b417a5e7c6132a9d7ecca7648d4ca7b1a373ffc16dbedc733c3273bd99b71efd"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.544369 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.544489 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.568389 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4crv" event={"ID":"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1","Type":"ContainerStarted","Data":"7354c255cfcfe0b314bf46cf72227ebcdcc658d48d95d38ecc2feccfe4f50964"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.569772 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6sblf"] Jan 23 16:38:07 crc kubenswrapper[4718]: E0123 16:38:07.574368 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a730641b-f7e3-4f6f-8aae-8acf92f37ca1" containerName="mariadb-account-create-update" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.574406 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a730641b-f7e3-4f6f-8aae-8acf92f37ca1" containerName="mariadb-account-create-update" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.574752 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a730641b-f7e3-4f6f-8aae-8acf92f37ca1" containerName="mariadb-account-create-update" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.575829 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.581055 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.594014 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b4d87654d-p9q2p" podStartSLOduration=4.593981648 podStartE2EDuration="4.593981648s" podCreationTimestamp="2026-01-23 16:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:07.588087708 +0000 UTC m=+1288.735329699" watchObservedRunningTime="2026-01-23 16:38:07.593981648 +0000 UTC m=+1288.741223639" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.594574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" event={"ID":"50e2741c-c631-42d1-bc2a-71292bbcfe61","Type":"ContainerStarted","Data":"63ae5cd414289f120bd643906abd81fcc995c7ab6486b15baf278693935e168e"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.594772 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.638249 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerStarted","Data":"9e3102c97c00beb40eafe36a5c1fd44cdfee5c0aa62f3cafa148058426b4dbcd"} Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.638433 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-5fd84bc878-plqxp" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api-log" containerID="cri-o://43624438068a895ad3dc4f78c6a25eef96f7e3669965115ca7455c2a4ea87632" gracePeriod=30 Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.638749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.638777 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.638809 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5fd84bc878-plqxp" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api" containerID="cri-o://9e3102c97c00beb40eafe36a5c1fd44cdfee5c0aa62f3cafa148058426b4dbcd" gracePeriod=30 Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.666247 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6677fdc7d-j4bjx" podStartSLOduration=7.666226119 podStartE2EDuration="7.666226119s" podCreationTimestamp="2026-01-23 16:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:07.642111964 +0000 UTC m=+1288.789353955" watchObservedRunningTime="2026-01-23 16:38:07.666226119 +0000 UTC m=+1288.813468110" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.667239 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6sblf"] Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.670620 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f6b6f5fd7-qpbqz" event={"ID":"e836bdf5-8379-4f60-8dbe-7be5381ed922","Type":"ContainerStarted","Data":"0e0742f6b494d82b131a5d7c74255e8e431a21012b4724504ee96d4a3732b97d"} Jan 23 
16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.670676 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f6b6f5fd7-qpbqz" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.700481 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62q65\" (UniqueName: \"kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.700803 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.719694 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b4crv" podStartSLOduration=5.763622438 podStartE2EDuration="51.719672939s" podCreationTimestamp="2026-01-23 16:37:16 +0000 UTC" firstStartedPulling="2026-01-23 16:37:19.470838327 +0000 UTC m=+1240.618080318" lastFinishedPulling="2026-01-23 16:38:05.426888828 +0000 UTC m=+1286.574130819" observedRunningTime="2026-01-23 16:38:07.672945851 +0000 UTC m=+1288.820187842" watchObservedRunningTime="2026-01-23 16:38:07.719672939 +0000 UTC m=+1288.866914930" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.785848 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fd84bc878-plqxp" podStartSLOduration=8.785824855 podStartE2EDuration="8.785824855s" podCreationTimestamp="2026-01-23 16:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:07.730087982 +0000 UTC m=+1288.877329973" watchObservedRunningTime="2026-01-23 16:38:07.785824855 +0000 UTC m=+1288.933066836" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.804174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.804562 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62q65\" (UniqueName: \"kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.806372 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" podStartSLOduration=8.806347821 podStartE2EDuration="8.806347821s" podCreationTimestamp="2026-01-23 16:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:07.764252749 +0000 UTC m=+1288.911494750" watchObservedRunningTime="2026-01-23 16:38:07.806347821 +0000 UTC m=+1288.953589802" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.814474 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 
16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.821202 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f6b6f5fd7-qpbqz" podStartSLOduration=7.821175433 podStartE2EDuration="7.821175433s" podCreationTimestamp="2026-01-23 16:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:07.794123339 +0000 UTC m=+1288.941365340" watchObservedRunningTime="2026-01-23 16:38:07.821175433 +0000 UTC m=+1288.968417424" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.839975 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62q65\" (UniqueName: \"kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65\") pod \"root-account-create-update-6sblf\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:07 crc kubenswrapper[4718]: I0123 16:38:07.939644 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.540236 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.565969 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6sblf"] Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.728448 4718 generic.go:334] "Generic (PLEG): container finished" podID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerID="9e3102c97c00beb40eafe36a5c1fd44cdfee5c0aa62f3cafa148058426b4dbcd" exitCode=0 Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.728488 4718 generic.go:334] "Generic (PLEG): container finished" podID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerID="43624438068a895ad3dc4f78c6a25eef96f7e3669965115ca7455c2a4ea87632" exitCode=143 Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.728552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerDied","Data":"9e3102c97c00beb40eafe36a5c1fd44cdfee5c0aa62f3cafa148058426b4dbcd"} Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.728588 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerDied","Data":"43624438068a895ad3dc4f78c6a25eef96f7e3669965115ca7455c2a4ea87632"} Jan 23 16:38:08 crc kubenswrapper[4718]: I0123 16:38:08.734283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sblf" event={"ID":"af0639d3-936d-4904-beff-534d226294e8","Type":"ContainerStarted","Data":"4254f5908cfe0677e7a2b83113663bde17318c9faabdacd2c84c6128907089c9"} Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.397173 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.397770 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.469693 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.469868 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.476522 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.644805 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.868958 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.938320 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data\") pod \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.938456 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom\") pod \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.938477 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle\") pod \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.939393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs\") pod \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.939460 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5j9\" (UniqueName: \"kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9\") pod \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\" (UID: \"4ab7c049-e49d-442e-b9e8-86cf1ac46a59\") " Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.941425 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs" (OuterVolumeSpecName: "logs") pod "4ab7c049-e49d-442e-b9e8-86cf1ac46a59" (UID: "4ab7c049-e49d-442e-b9e8-86cf1ac46a59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.947809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ab7c049-e49d-442e-b9e8-86cf1ac46a59" (UID: "4ab7c049-e49d-442e-b9e8-86cf1ac46a59"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:09 crc kubenswrapper[4718]: I0123 16:38:09.949760 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9" (OuterVolumeSpecName: "kube-api-access-bf5j9") pod "4ab7c049-e49d-442e-b9e8-86cf1ac46a59" (UID: "4ab7c049-e49d-442e-b9e8-86cf1ac46a59"). InnerVolumeSpecName "kube-api-access-bf5j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.027280 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data" (OuterVolumeSpecName: "config-data") pod "4ab7c049-e49d-442e-b9e8-86cf1ac46a59" (UID: "4ab7c049-e49d-442e-b9e8-86cf1ac46a59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.028750 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ab7c049-e49d-442e-b9e8-86cf1ac46a59" (UID: "4ab7c049-e49d-442e-b9e8-86cf1ac46a59"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.043888 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.043939 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.043958 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.043975 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.043988 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5j9\" (UniqueName: \"kubernetes.io/projected/4ab7c049-e49d-442e-b9e8-86cf1ac46a59-kube-api-access-bf5j9\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.787538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerStarted","Data":"03fc5bd2a6a4b9f99dc015ed34bf0078a5965512a869f1ca4a4ef87aaf4368c1"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.788472 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" 
event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerStarted","Data":"8d06eaf6a30db9574ea4b92daad6e40594c9b4746c535755881b5a1a8c5bd46b"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.797155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" event={"ID":"b146c37c-0473-4db8-a743-72a7576edf59","Type":"ContainerStarted","Data":"5655c8ab78c16fec5c436fc3b9fcb89efe423f8ceac5db7d8370e617ed724213"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.797194 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" event={"ID":"b146c37c-0473-4db8-a743-72a7576edf59","Type":"ContainerStarted","Data":"3ac8005ac3fa959c58ee5e5185b2c9f7a6110a8d479cee0bc8211f781ef3a822"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.808792 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fd84bc878-plqxp" event={"ID":"4ab7c049-e49d-442e-b9e8-86cf1ac46a59","Type":"ContainerDied","Data":"3dbc9942b4e9ef43a8e28ffdb1b3337107d0d213f9aeacf64207153d6d4b8e3f"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.808862 4718 scope.go:117] "RemoveContainer" containerID="9e3102c97c00beb40eafe36a5c1fd44cdfee5c0aa62f3cafa148058426b4dbcd" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.809064 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fd84bc878-plqxp" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.817870 4718 generic.go:334] "Generic (PLEG): container finished" podID="af0639d3-936d-4904-beff-534d226294e8" containerID="fe06aa906b6ed316e84c76663bba9b775bb3e1e469391938485f2ead3d54a80a" exitCode=0 Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.818421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sblf" event={"ID":"af0639d3-936d-4904-beff-534d226294e8","Type":"ContainerDied","Data":"fe06aa906b6ed316e84c76663bba9b775bb3e1e469391938485f2ead3d54a80a"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.828766 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" podStartSLOduration=6.867131586 podStartE2EDuration="11.828748971s" podCreationTimestamp="2026-01-23 16:37:59 +0000 UTC" firstStartedPulling="2026-01-23 16:38:04.95930402 +0000 UTC m=+1286.106546031" lastFinishedPulling="2026-01-23 16:38:09.920921425 +0000 UTC m=+1291.068163416" observedRunningTime="2026-01-23 16:38:10.822999245 +0000 UTC m=+1291.970241236" watchObservedRunningTime="2026-01-23 16:38:10.828748971 +0000 UTC m=+1291.975990962" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.829619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerStarted","Data":"41631682393d84d8a47c83a9e719f972c08836c1092c617cc9085470498ff6a0"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.839312 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8474589d6c-tbnqc" event={"ID":"12450ab0-8804-4354-83ff-47ca9b58bcec","Type":"ContainerStarted","Data":"9c90f1e346df5a4478c690e1c4869c311ea44cb227d9e813a611b60906ea1810"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.839503 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8474589d6c-tbnqc" event={"ID":"12450ab0-8804-4354-83ff-47ca9b58bcec","Type":"ContainerStarted","Data":"df09061cf3d7c13bc3f00b8a97a75f2d1f8e218bd2602d775624c58c9bc0951f"} Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.895840 4718 scope.go:117] "RemoveContainer" containerID="43624438068a895ad3dc4f78c6a25eef96f7e3669965115ca7455c2a4ea87632" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.919179 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-655bd97bbb-6cj47" podStartSLOduration=7.070613226 podStartE2EDuration="10.919148634s" podCreationTimestamp="2026-01-23 16:38:00 +0000 UTC" firstStartedPulling="2026-01-23 16:38:06.079850909 +0000 UTC m=+1287.227092900" lastFinishedPulling="2026-01-23 16:38:09.928386317 +0000 UTC m=+1291.075628308" observedRunningTime="2026-01-23 16:38:10.866454524 +0000 UTC m=+1292.013696515" watchObservedRunningTime="2026-01-23 16:38:10.919148634 +0000 UTC m=+1292.066390625" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.960554 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78f48f99c9-tjmw2" podStartSLOduration=6.960051599 podStartE2EDuration="11.960503837s" podCreationTimestamp="2026-01-23 16:37:59 +0000 UTC" firstStartedPulling="2026-01-23 16:38:04.925218335 +0000 UTC m=+1286.072460326" lastFinishedPulling="2026-01-23 16:38:09.925670573 +0000 UTC m=+1291.072912564" observedRunningTime="2026-01-23 16:38:10.887334281 +0000 UTC m=+1292.034576272" watchObservedRunningTime="2026-01-23 16:38:10.960503837 +0000 UTC m=+1292.107745828" Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.981754 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"] Jan 23 16:38:10 crc kubenswrapper[4718]: I0123 16:38:10.982942 4718 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-worker-8474589d6c-tbnqc" podStartSLOduration=7.330628382 podStartE2EDuration="10.982919235s" podCreationTimestamp="2026-01-23 16:38:00 +0000 UTC" firstStartedPulling="2026-01-23 16:38:06.272373353 +0000 UTC m=+1287.419615344" lastFinishedPulling="2026-01-23 16:38:09.924664216 +0000 UTC m=+1291.071906197" observedRunningTime="2026-01-23 16:38:10.91201848 +0000 UTC m=+1292.059260471" watchObservedRunningTime="2026-01-23 16:38:10.982919235 +0000 UTC m=+1292.130161226" Jan 23 16:38:11 crc kubenswrapper[4718]: I0123 16:38:11.004492 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:38:11 crc kubenswrapper[4718]: I0123 16:38:11.013081 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5fd84bc878-plqxp"] Jan 23 16:38:11 crc kubenswrapper[4718]: I0123 16:38:11.024066 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"] Jan 23 16:38:11 crc kubenswrapper[4718]: I0123 16:38:11.168934 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" path="/var/lib/kubelet/pods/4ab7c049-e49d-442e-b9e8-86cf1ac46a59/volumes" Jan 23 16:38:11 crc kubenswrapper[4718]: I0123 16:38:11.860670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerStarted","Data":"0234150c1a9fb395f32836b9e738e868435a0f389fee04c20af4b8d3f6f3fa50"} Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.392592 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.431417 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62q65\" (UniqueName: \"kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65\") pod \"af0639d3-936d-4904-beff-534d226294e8\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.431719 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts\") pod \"af0639d3-936d-4904-beff-534d226294e8\" (UID: \"af0639d3-936d-4904-beff-534d226294e8\") " Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.433468 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af0639d3-936d-4904-beff-534d226294e8" (UID: "af0639d3-936d-4904-beff-534d226294e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.442292 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65" (OuterVolumeSpecName: "kube-api-access-62q65") pod "af0639d3-936d-4904-beff-534d226294e8" (UID: "af0639d3-936d-4904-beff-534d226294e8"). InnerVolumeSpecName "kube-api-access-62q65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.536242 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0639d3-936d-4904-beff-534d226294e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.536296 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62q65\" (UniqueName: \"kubernetes.io/projected/af0639d3-936d-4904-beff-534d226294e8-kube-api-access-62q65\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.894427 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78f48f99c9-tjmw2" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker-log" containerID="cri-o://41631682393d84d8a47c83a9e719f972c08836c1092c617cc9085470498ff6a0" gracePeriod=30 Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.894829 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6sblf" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.895342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sblf" event={"ID":"af0639d3-936d-4904-beff-534d226294e8","Type":"ContainerDied","Data":"4254f5908cfe0677e7a2b83113663bde17318c9faabdacd2c84c6128907089c9"} Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.895370 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4254f5908cfe0677e7a2b83113663bde17318c9faabdacd2c84c6128907089c9" Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.895497 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener-log" containerID="cri-o://8d06eaf6a30db9574ea4b92daad6e40594c9b4746c535755881b5a1a8c5bd46b" gracePeriod=30 Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.895550 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener" containerID="cri-o://03fc5bd2a6a4b9f99dc015ed34bf0078a5965512a869f1ca4a4ef87aaf4368c1" gracePeriod=30 Jan 23 16:38:12 crc kubenswrapper[4718]: I0123 16:38:12.895598 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78f48f99c9-tjmw2" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker" containerID="cri-o://0234150c1a9fb395f32836b9e738e868435a0f389fee04c20af4b8d3f6f3fa50" gracePeriod=30 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.603383 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 
16:38:13.918805 4718 generic.go:334] "Generic (PLEG): container finished" podID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerID="0234150c1a9fb395f32836b9e738e868435a0f389fee04c20af4b8d3f6f3fa50" exitCode=0 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.919269 4718 generic.go:334] "Generic (PLEG): container finished" podID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerID="41631682393d84d8a47c83a9e719f972c08836c1092c617cc9085470498ff6a0" exitCode=143 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.918893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerDied","Data":"0234150c1a9fb395f32836b9e738e868435a0f389fee04c20af4b8d3f6f3fa50"} Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.919377 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerDied","Data":"41631682393d84d8a47c83a9e719f972c08836c1092c617cc9085470498ff6a0"} Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.923286 4718 generic.go:334] "Generic (PLEG): container finished" podID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" containerID="7354c255cfcfe0b314bf46cf72227ebcdcc658d48d95d38ecc2feccfe4f50964" exitCode=0 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.923368 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4crv" event={"ID":"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1","Type":"ContainerDied","Data":"7354c255cfcfe0b314bf46cf72227ebcdcc658d48d95d38ecc2feccfe4f50964"} Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.925311 4718 generic.go:334] "Generic (PLEG): container finished" podID="13e342b5-4486-4b97-8e64-d6a189164e51" containerID="03fc5bd2a6a4b9f99dc015ed34bf0078a5965512a869f1ca4a4ef87aaf4368c1" exitCode=0 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.925345 4718 generic.go:334] 
"Generic (PLEG): container finished" podID="13e342b5-4486-4b97-8e64-d6a189164e51" containerID="8d06eaf6a30db9574ea4b92daad6e40594c9b4746c535755881b5a1a8c5bd46b" exitCode=143 Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.925389 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerDied","Data":"03fc5bd2a6a4b9f99dc015ed34bf0078a5965512a869f1ca4a4ef87aaf4368c1"} Jan 23 16:38:13 crc kubenswrapper[4718]: I0123 16:38:13.925419 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerDied","Data":"8d06eaf6a30db9574ea4b92daad6e40594c9b4746c535755881b5a1a8c5bd46b"} Jan 23 16:38:14 crc kubenswrapper[4718]: I0123 16:38:14.832085 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:38:14 crc kubenswrapper[4718]: I0123 16:38:14.942075 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"] Jan 23 16:38:14 crc kubenswrapper[4718]: I0123 16:38:14.942358 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="dnsmasq-dns" containerID="cri-o://0f58d8ab7425a08edd31c56f4cde058daa496f5badbf94fa61c6ffa9300a8e84" gracePeriod=10 Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.164001 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.273346 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b4d87654d-p9q2p" Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.365911 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"] Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.366192 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api-log" containerID="cri-o://b417a5e7c6132a9d7ecca7648d4ca7b1a373ffc16dbedc733c3273bd99b71efd" gracePeriod=30 Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.368499 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api" containerID="cri-o://541f6992eb1b644c46e48bc087802cdbd1a354efc0082d32ff6319e93e2f6e36" gracePeriod=30 Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.389023 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": EOF" Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.963692 4718 generic.go:334] "Generic (PLEG): container finished" podID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerID="b417a5e7c6132a9d7ecca7648d4ca7b1a373ffc16dbedc733c3273bd99b71efd" exitCode=143 Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.963759 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerDied","Data":"b417a5e7c6132a9d7ecca7648d4ca7b1a373ffc16dbedc733c3273bd99b71efd"} Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.968727 4718 generic.go:334] "Generic (PLEG): container finished" podID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerID="0f58d8ab7425a08edd31c56f4cde058daa496f5badbf94fa61c6ffa9300a8e84" exitCode=0 Jan 23 16:38:15 crc kubenswrapper[4718]: I0123 16:38:15.968823 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" event={"ID":"1d0a620d-3a8d-418e-85d1-f0be169a3d48","Type":"ContainerDied","Data":"0f58d8ab7425a08edd31c56f4cde058daa496f5badbf94fa61c6ffa9300a8e84"} Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.285283 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6sblf"] Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.294899 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6sblf"] Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.827171 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4crv" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883075 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883198 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883231 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxc7b\" 
(UniqueName: \"kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883623 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883700 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle\") pod \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\" (UID: \"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1\") " Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.883909 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.884921 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.906235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b" (OuterVolumeSpecName: "kube-api-access-lxc7b") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). 
InnerVolumeSpecName "kube-api-access-lxc7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.916844 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts" (OuterVolumeSpecName: "scripts") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.919546 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.947479 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.994001 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.994036 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxc7b\" (UniqueName: \"kubernetes.io/projected/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-kube-api-access-lxc7b\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.994053 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:16 crc kubenswrapper[4718]: I0123 16:38:16.994065 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.002840 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data" (OuterVolumeSpecName: "config-data") pod "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" (UID: "0bd0aeb0-24b5-49c2-96cd-83f9defa05e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.016429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4crv" event={"ID":"0bd0aeb0-24b5-49c2-96cd-83f9defa05e1","Type":"ContainerDied","Data":"e0debcd7ae658e71a04915c0b43ace8edd448a28ddc3fc0e6b2625c9fd71c5de"} Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.016504 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0debcd7ae658e71a04915c0b43ace8edd448a28ddc3fc0e6b2625c9fd71c5de" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.016609 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4crv" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.096571 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.154464 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0639d3-936d-4904-beff-534d226294e8" path="/var/lib/kubelet/pods/af0639d3-936d-4904-beff-534d226294e8/volumes" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.401074 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.637757 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z8mq2"] Jan 23 16:38:17 crc kubenswrapper[4718]: E0123 16:38:17.638542 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" containerName="cinder-db-sync" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638560 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" 
containerName="cinder-db-sync" Jan 23 16:38:17 crc kubenswrapper[4718]: E0123 16:38:17.638578 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0639d3-936d-4904-beff-534d226294e8" containerName="mariadb-account-create-update" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638585 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0639d3-936d-4904-beff-534d226294e8" containerName="mariadb-account-create-update" Jan 23 16:38:17 crc kubenswrapper[4718]: E0123 16:38:17.638620 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638650 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api" Jan 23 16:38:17 crc kubenswrapper[4718]: E0123 16:38:17.638671 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api-log" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638678 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api-log" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638949 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0639d3-936d-4904-beff-534d226294e8" containerName="mariadb-account-create-update" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638980 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api-log" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.638998 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" containerName="cinder-db-sync" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.639010 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4ab7c049-e49d-442e-b9e8-86cf1ac46a59" containerName="barbican-api" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.640120 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.643103 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.662815 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z8mq2"] Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.698386 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"] Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.698787 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cf46bcc5-bh2nc" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-httpd" containerID="cri-o://920a7b4d135e2096ac3e52cd9501b1a325c99bb36c07452d45b94db6e0e34762" gracePeriod=30 Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.698707 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cf46bcc5-bh2nc" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-api" containerID="cri-o://625e60bdbbdf2daf1ab19a225b42fccb5ba9ac6267a59094c4e7027ce71bece4" gracePeriod=30 Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.720469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqjw\" (UniqueName: \"kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.727364 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.737708 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8567b78dd5-chd6w"] Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.740351 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.748849 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8567b78dd5-chd6w"] Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.815720 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7cf46bcc5-bh2nc" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.197:9696/\": read tcp 10.217.0.2:36438->10.217.0.197:9696: read: connection reset by peer" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.830648 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-httpd-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.830834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-public-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " 
pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.830963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqjw\" (UniqueName: \"kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.831340 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.831613 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-ovndb-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.831810 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.831903 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-combined-ca-bundle\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " 
pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.831978 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c7z\" (UniqueName: \"kubernetes.io/projected/5c367121-318c-413c-96e5-f53a105d91d3-kube-api-access-d6c7z\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.832100 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-internal-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.833388 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.857681 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqjw\" (UniqueName: \"kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw\") pod \"root-account-create-update-z8mq2\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") " pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.934755 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c7z\" (UniqueName: \"kubernetes.io/projected/5c367121-318c-413c-96e5-f53a105d91d3-kube-api-access-d6c7z\") pod \"neutron-8567b78dd5-chd6w\" (UID: 
\"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.934844 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-internal-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.934909 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-httpd-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.934937 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-public-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.935012 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.935094 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-ovndb-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc 
kubenswrapper[4718]: I0123 16:38:17.935305 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-combined-ca-bundle\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.940058 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-combined-ca-bundle\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.940735 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.946230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-httpd-config\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.947071 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-ovndb-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.950911 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-public-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.957717 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c7z\" (UniqueName: \"kubernetes.io/projected/5c367121-318c-413c-96e5-f53a105d91d3-kube-api-access-d6c7z\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.959125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c367121-318c-413c-96e5-f53a105d91d3-internal-tls-certs\") pod \"neutron-8567b78dd5-chd6w\" (UID: \"5c367121-318c-413c-96e5-f53a105d91d3\") " pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:17 crc kubenswrapper[4718]: I0123 16:38:17.974718 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8mq2" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.052837 4718 generic.go:334] "Generic (PLEG): container finished" podID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerID="920a7b4d135e2096ac3e52cd9501b1a325c99bb36c07452d45b94db6e0e34762" exitCode=0 Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.052919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerDied","Data":"920a7b4d135e2096ac3e52cd9501b1a325c99bb36c07452d45b94db6e0e34762"} Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.078473 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.087780 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.089833 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.098521 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.098772 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kw66w" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.098930 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.100201 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.105595 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.229031 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.229150 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 
23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.229347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.229922 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.229957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.230503 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292d4\" (UniqueName: \"kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.268551 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.271330 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.285219 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.332859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.332993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333016 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc\") 
pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333114 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292d4\" (UniqueName: \"kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333180 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333231 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkf9\" (UniqueName: \"kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9\") pod 
\"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333249 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.333267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.335473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.339317 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.340268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc 
kubenswrapper[4718]: I0123 16:38:18.345371 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.348604 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.352301 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.355418 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.356267 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.359014 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292d4\" (UniqueName: \"kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4\") pod \"cinder-scheduler-0\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") " pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.364400 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437084 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437180 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437279 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " 
pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437311 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkf9\" (UniqueName: \"kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437359 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437388 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437414 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437435 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln89p\" (UniqueName: \"kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.437465 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.438266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.438894 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.439067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.439529 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.444089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.460139 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkf9\" (UniqueName: \"kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9\") pod \"dnsmasq-dns-6578955fd5-w4hbk\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.486621 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540573 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540734 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540765 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln89p\" 
(UniqueName: \"kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.540812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.541322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.542043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.545772 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.546753 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.547447 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.547892 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.562278 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln89p\" (UniqueName: \"kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p\") pod \"cinder-api-0\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.613203 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.736576 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.826335 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": read tcp 10.217.0.2:41448->10.217.0.209:9311: read: connection reset by peer" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.826966 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Jan 23 16:38:18 crc kubenswrapper[4718]: I0123 16:38:18.827080 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6677fdc7d-j4bjx" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": read tcp 10.217.0.2:49188->10.217.0.209:9311: read: connection reset by peer" Jan 23 16:38:19 crc kubenswrapper[4718]: I0123 16:38:19.074174 4718 generic.go:334] "Generic (PLEG): container finished" podID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerID="541f6992eb1b644c46e48bc087802cdbd1a354efc0082d32ff6319e93e2f6e36" exitCode=0 Jan 23 16:38:19 crc kubenswrapper[4718]: I0123 16:38:19.074234 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerDied","Data":"541f6992eb1b644c46e48bc087802cdbd1a354efc0082d32ff6319e93e2f6e36"} Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.110454 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" 
event={"ID":"1d0a620d-3a8d-418e-85d1-f0be169a3d48","Type":"ContainerDied","Data":"49f4cc5f9b62ed798ffcf456f23460c5d21206d7460b41d2aae53ebf5c5b93f5"} Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.111231 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f4cc5f9b62ed798ffcf456f23460c5d21206d7460b41d2aae53ebf5c5b93f5" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.122113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78f48f99c9-tjmw2" event={"ID":"7faa3883-3a27-4ef9-9da6-476f43ad53e0","Type":"ContainerDied","Data":"e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860"} Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.122159 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e499ca6e2964adc5e9dd3bcddce7e920122053adbbee7abdce84ea25850860" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.124679 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" event={"ID":"13e342b5-4486-4b97-8e64-d6a189164e51","Type":"ContainerDied","Data":"00bddcb1497ed28f8079a7e40adea6c3fc53b6689f54eb4d2255d5599c48f165"} Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.124717 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00bddcb1497ed28f8079a7e40adea6c3fc53b6689f54eb4d2255d5599c48f165" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.270412 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.293889 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.296090 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395089 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395595 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data\") pod \"13e342b5-4486-4b97-8e64-d6a189164e51\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395649 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395681 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs\") pod \"13e342b5-4486-4b97-8e64-d6a189164e51\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395725 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle\") pod \"13e342b5-4486-4b97-8e64-d6a189164e51\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395825 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395894 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395919 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.395968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.396036 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv4jg\" (UniqueName: \"kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg\") pod \"13e342b5-4486-4b97-8e64-d6a189164e51\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 
16:38:20.396070 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qct\" (UniqueName: \"kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.396111 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.396171 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdwpw\" (UniqueName: \"kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.396196 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom\") pod \"13e342b5-4486-4b97-8e64-d6a189164e51\" (UID: \"13e342b5-4486-4b97-8e64-d6a189164e51\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.396229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb\") pod \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\" (UID: \"1d0a620d-3a8d-418e-85d1-f0be169a3d48\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.400833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs" (OuterVolumeSpecName: "logs") pod 
"13e342b5-4486-4b97-8e64-d6a189164e51" (UID: "13e342b5-4486-4b97-8e64-d6a189164e51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.416796 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs" (OuterVolumeSpecName: "logs") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.419664 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct" (OuterVolumeSpecName: "kube-api-access-w5qct") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "kube-api-access-w5qct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.430672 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: E0123 16:38:20.437996 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.447668 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw" (OuterVolumeSpecName: "kube-api-access-hdwpw") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "kube-api-access-hdwpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.447821 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg" (OuterVolumeSpecName: "kube-api-access-kv4jg") pod "13e342b5-4486-4b97-8e64-d6a189164e51" (UID: "13e342b5-4486-4b97-8e64-d6a189164e51"). InnerVolumeSpecName "kube-api-access-kv4jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.457821 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "13e342b5-4486-4b97-8e64-d6a189164e51" (UID: "13e342b5-4486-4b97-8e64-d6a189164e51"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.489759 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.500528 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.500819 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e342b5-4486-4b97-8e64-d6a189164e51-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.500882 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv4jg\" (UniqueName: \"kubernetes.io/projected/13e342b5-4486-4b97-8e64-d6a189164e51-kube-api-access-kv4jg\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.500941 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qct\" (UniqueName: \"kubernetes.io/projected/7faa3883-3a27-4ef9-9da6-476f43ad53e0-kube-api-access-w5qct\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.501018 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7faa3883-3a27-4ef9-9da6-476f43ad53e0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.501082 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdwpw\" (UniqueName: \"kubernetes.io/projected/1d0a620d-3a8d-418e-85d1-f0be169a3d48-kube-api-access-hdwpw\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.501139 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: 
I0123 16:38:20.557872 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.567194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13e342b5-4486-4b97-8e64-d6a189164e51" (UID: "13e342b5-4486-4b97-8e64-d6a189164e51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.577918 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data" (OuterVolumeSpecName: "config-data") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.600206 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data" (OuterVolumeSpecName: "config-data") pod "13e342b5-4486-4b97-8e64-d6a189164e51" (UID: "13e342b5-4486-4b97-8e64-d6a189164e51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.601759 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.603140 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data\") pod \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.603203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwszt\" (UniqueName: \"kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt\") pod \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.603275 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom\") pod \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.603824 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") pod \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\" (UID: \"7faa3883-3a27-4ef9-9da6-476f43ad53e0\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.603914 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs\") pod \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.604081 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle\") pod \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\" (UID: \"06131a45-931f-4a9f-bb3f-a250ab6f5aaa\") " Jan 23 16:38:20 crc kubenswrapper[4718]: W0123 16:38:20.605041 4718 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7faa3883-3a27-4ef9-9da6-476f43ad53e0/volumes/kubernetes.io~secret/combined-ca-bundle Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605096 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7faa3883-3a27-4ef9-9da6-476f43ad53e0" (UID: "7faa3883-3a27-4ef9-9da6-476f43ad53e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605126 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605150 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605164 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605177 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e342b5-4486-4b97-8e64-d6a189164e51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605188 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa3883-3a27-4ef9-9da6-476f43ad53e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.605608 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs" (OuterVolumeSpecName: "logs") pod "06131a45-931f-4a9f-bb3f-a250ab6f5aaa" (UID: "06131a45-931f-4a9f-bb3f-a250ab6f5aaa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.609050 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.619315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "06131a45-931f-4a9f-bb3f-a250ab6f5aaa" (UID: "06131a45-931f-4a9f-bb3f-a250ab6f5aaa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.617613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt" (OuterVolumeSpecName: "kube-api-access-fwszt") pod "06131a45-931f-4a9f-bb3f-a250ab6f5aaa" (UID: "06131a45-931f-4a9f-bb3f-a250ab6f5aaa"). InnerVolumeSpecName "kube-api-access-fwszt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.639043 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.639489 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.654077 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06131a45-931f-4a9f-bb3f-a250ab6f5aaa" (UID: "06131a45-931f-4a9f-bb3f-a250ab6f5aaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.655170 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config" (OuterVolumeSpecName: "config") pod "1d0a620d-3a8d-418e-85d1-f0be169a3d48" (UID: "1d0a620d-3a8d-418e-85d1-f0be169a3d48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.708522 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwszt\" (UniqueName: \"kubernetes.io/projected/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-kube-api-access-fwszt\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711089 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711104 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711114 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711128 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711137 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711147 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d0a620d-3a8d-418e-85d1-f0be169a3d48-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.711155 4718 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.717622 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data" (OuterVolumeSpecName: "config-data") pod "06131a45-931f-4a9f-bb3f-a250ab6f5aaa" (UID: "06131a45-931f-4a9f-bb3f-a250ab6f5aaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.796384 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.814510 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06131a45-931f-4a9f-bb3f-a250ab6f5aaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.935206 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z8mq2"] Jan 23 16:38:20 crc kubenswrapper[4718]: I0123 16:38:20.945106 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.180270 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="proxy-httpd" containerID="cri-o://7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223" gracePeriod=30 Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.180408 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="sg-core" 
containerID="cri-o://6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4" gracePeriod=30 Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.180460 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="ceilometer-notification-agent" containerID="cri-o://1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6" gracePeriod=30 Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.185335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerStarted","Data":"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"} Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.185384 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.198746 4718 generic.go:334] "Generic (PLEG): container finished" podID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerID="625e60bdbbdf2daf1ab19a225b42fccb5ba9ac6267a59094c4e7027ce71bece4" exitCode=0 Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.198858 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerDied","Data":"625e60bdbbdf2daf1ab19a225b42fccb5ba9ac6267a59094c4e7027ce71bece4"} Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.215918 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.232576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6677fdc7d-j4bjx" event={"ID":"06131a45-931f-4a9f-bb3f-a250ab6f5aaa","Type":"ContainerDied","Data":"10aefa05da49a8857aa6de53c128ded1db1841f7f4c88c8a053f7301afe44341"} Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 
16:38:21.233216 4718 scope.go:117] "RemoveContainer" containerID="541f6992eb1b644c46e48bc087802cdbd1a354efc0082d32ff6319e93e2f6e36" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.233301 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6677fdc7d-j4bjx" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.244553 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.245315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8mq2" event={"ID":"b9e39eff-698f-4561-bfa8-4a28b17e4559","Type":"ContainerStarted","Data":"4ba271f1b27c4a21747534055586d45a72c8d789b64bf30c182164de22fb9464"} Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.253278 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.258205 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerStarted","Data":"9859d5edfecb927be1e584a992643ef6f92d7a4edcdf41c328ab7c17a6dbc96d"} Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.258435 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859b68d8fd-fn26w" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.258764 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78f48f99c9-tjmw2" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.335518 4718 scope.go:117] "RemoveContainer" containerID="b417a5e7c6132a9d7ecca7648d4ca7b1a373ffc16dbedc733c3273bd99b71efd" Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.369816 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cf46bcc5-bh2nc"
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.407854 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.439420 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.439617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.439730 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.439770 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.439923 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.440049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frjk2\" (UniqueName: \"kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.440120 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs\") pod \"21c0d3dd-fddc-4460-9bf6-89df19751954\" (UID: \"21c0d3dd-fddc-4460-9bf6-89df19751954\") "
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.450082 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2" (OuterVolumeSpecName: "kube-api-access-frjk2") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "kube-api-access-frjk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.451669 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2kxbx"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.455400 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.463226 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.481370 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6677fdc7d-j4bjx"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.500474 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8567b78dd5-chd6w"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.527335 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.542194 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-78f48f99c9-tjmw2"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.548580 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frjk2\" (UniqueName: \"kubernetes.io/projected/21c0d3dd-fddc-4460-9bf6-89df19751954-kube-api-access-frjk2\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.548618 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.555980 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.576050 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.591885 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-859b68d8fd-fn26w"]
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.597858 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.626987 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config" (OuterVolumeSpecName: "config") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.640080 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.654434 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.654477 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.654492 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.654507 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.690169 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "21c0d3dd-fddc-4460-9bf6-89df19751954" (UID: "21c0d3dd-fddc-4460-9bf6-89df19751954"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:21 crc kubenswrapper[4718]: I0123 16:38:21.757816 4718 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c0d3dd-fddc-4460-9bf6-89df19751954-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.282533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerStarted","Data":"ff94ae833b7ed06f3892f28025598035bf83d5fae32541285d25f4e5a42a084e"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.283061 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerStarted","Data":"942c7fd6ce679a7e62ec996a662aa38d6be37e10151cab9fdd03ea4040ce1148"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.292130 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8567b78dd5-chd6w" event={"ID":"5c367121-318c-413c-96e5-f53a105d91d3","Type":"ContainerStarted","Data":"9b1e2c13aa184f2adbf3baa68cbd419fdd09bc4356f7accde67f57bd690a5eff"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.292196 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8567b78dd5-chd6w" event={"ID":"5c367121-318c-413c-96e5-f53a105d91d3","Type":"ContainerStarted","Data":"807c39928bef5364ad683c31ed93effcc80737183c232ff37970ef6555b0a51e"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.296525 4718 generic.go:334] "Generic (PLEG): container finished" podID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerID="7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223" exitCode=0
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.296584 4718 generic.go:334] "Generic (PLEG): container finished" podID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerID="6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4" exitCode=2
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.296643 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerDied","Data":"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.296721 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerDied","Data":"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.299884 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf46bcc5-bh2nc" event={"ID":"21c0d3dd-fddc-4460-9bf6-89df19751954","Type":"ContainerDied","Data":"3063cc15463493693c745a846d696eac94ec513d31a7949f097c5f9ff60e0b9a"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.299959 4718 scope.go:117] "RemoveContainer" containerID="920a7b4d135e2096ac3e52cd9501b1a325c99bb36c07452d45b94db6e0e34762"
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.299900 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cf46bcc5-bh2nc"
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.306450 4718 generic.go:334] "Generic (PLEG): container finished" podID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerID="3a0886b2966b22ce24c80e8dd6dc18afe1ae7176611864ce2e51ddce4717a14e" exitCode=0
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.306533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" event={"ID":"b2153bde-84f6-45c0-9b35-e7b4943cbcee","Type":"ContainerDied","Data":"3a0886b2966b22ce24c80e8dd6dc18afe1ae7176611864ce2e51ddce4717a14e"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.306574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" event={"ID":"b2153bde-84f6-45c0-9b35-e7b4943cbcee","Type":"ContainerStarted","Data":"4a64815e27dd895d98b6c353dca2dd9c41cb450524ee9961278eda0c42157463"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.315752 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9e39eff-698f-4561-bfa8-4a28b17e4559" containerID="d42542d148e6b6999321a32ed3614146e64ca825b9a35f49b99cda3f13668efd" exitCode=0
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.315827 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8mq2" event={"ID":"b9e39eff-698f-4561-bfa8-4a28b17e4559","Type":"ContainerDied","Data":"d42542d148e6b6999321a32ed3614146e64ca825b9a35f49b99cda3f13668efd"}
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.386384 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"]
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.393742 4718 scope.go:117] "RemoveContainer" containerID="625e60bdbbdf2daf1ab19a225b42fccb5ba9ac6267a59094c4e7027ce71bece4"
Jan 23 16:38:22 crc kubenswrapper[4718]: I0123 16:38:22.396409 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cf46bcc5-bh2nc"]
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.163620 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" path="/var/lib/kubelet/pods/06131a45-931f-4a9f-bb3f-a250ab6f5aaa/volumes"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.165794 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" path="/var/lib/kubelet/pods/13e342b5-4486-4b97-8e64-d6a189164e51/volumes"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.166579 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" path="/var/lib/kubelet/pods/1d0a620d-3a8d-418e-85d1-f0be169a3d48/volumes"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.169593 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" path="/var/lib/kubelet/pods/21c0d3dd-fddc-4460-9bf6-89df19751954/volumes"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.170914 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" path="/var/lib/kubelet/pods/7faa3883-3a27-4ef9-9da6-476f43ad53e0/volumes"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.330874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" event={"ID":"b2153bde-84f6-45c0-9b35-e7b4943cbcee","Type":"ContainerStarted","Data":"2d5fbf3a343b8f4233eb653866d1e5dc200f2cbdeb4aa20acae03845c6e095ed"}
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.331099 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.334999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerStarted","Data":"6f0f4f12cfcefeb95c63a439038340192c6d6ae42a05837cbd0bfcaee252585f"}
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.335100 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api-log" containerID="cri-o://ff94ae833b7ed06f3892f28025598035bf83d5fae32541285d25f4e5a42a084e" gracePeriod=30
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.335153 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api" containerID="cri-o://6f0f4f12cfcefeb95c63a439038340192c6d6ae42a05837cbd0bfcaee252585f" gracePeriod=30
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.335171 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.337680 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerStarted","Data":"007cd94357f3d9f181dde407729511b7c9dea9929675039e104fe8845fccad25"}
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.337782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerStarted","Data":"6bc00aeef6027ccb83d10d3a69bb83ba9595dda4ec9a3c92a55987fffec045c2"}
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.341817 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8567b78dd5-chd6w" event={"ID":"5c367121-318c-413c-96e5-f53a105d91d3","Type":"ContainerStarted","Data":"65ebb705bc078c808db8aa158828fedf84a2f4f04d942dc79c14426a173393de"}
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.362115 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" podStartSLOduration=5.362090872 podStartE2EDuration="5.362090872s" podCreationTimestamp="2026-01-23 16:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:23.352384218 +0000 UTC m=+1304.499626219" watchObservedRunningTime="2026-01-23 16:38:23.362090872 +0000 UTC m=+1304.509332863"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.386838 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.386808063 podStartE2EDuration="5.386808063s" podCreationTimestamp="2026-01-23 16:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:23.373876021 +0000 UTC m=+1304.521118022" watchObservedRunningTime="2026-01-23 16:38:23.386808063 +0000 UTC m=+1304.534050054"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.422774 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8567b78dd5-chd6w" podStartSLOduration=6.422739227 podStartE2EDuration="6.422739227s" podCreationTimestamp="2026-01-23 16:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:23.391956392 +0000 UTC m=+1304.539198413" watchObservedRunningTime="2026-01-23 16:38:23.422739227 +0000 UTC m=+1304.569981228"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.470533 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.630075746 podStartE2EDuration="5.470503753s" podCreationTimestamp="2026-01-23 16:38:18 +0000 UTC" firstStartedPulling="2026-01-23 16:38:20.946772687 +0000 UTC m=+1302.094014678" lastFinishedPulling="2026-01-23 16:38:21.787200694 +0000 UTC m=+1302.934442685" observedRunningTime="2026-01-23 16:38:23.419809758 +0000 UTC m=+1304.567051769" watchObservedRunningTime="2026-01-23 16:38:23.470503753 +0000 UTC m=+1304.617745744"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.492761 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 16:38:23 crc kubenswrapper[4718]: I0123 16:38:23.958550 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8mq2"
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.041150 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqjw\" (UniqueName: \"kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw\") pod \"b9e39eff-698f-4561-bfa8-4a28b17e4559\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") "
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.041472 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts\") pod \"b9e39eff-698f-4561-bfa8-4a28b17e4559\" (UID: \"b9e39eff-698f-4561-bfa8-4a28b17e4559\") "
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.044261 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9e39eff-698f-4561-bfa8-4a28b17e4559" (UID: "b9e39eff-698f-4561-bfa8-4a28b17e4559"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.063484 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw" (OuterVolumeSpecName: "kube-api-access-wtqjw") pod "b9e39eff-698f-4561-bfa8-4a28b17e4559" (UID: "b9e39eff-698f-4561-bfa8-4a28b17e4559"). InnerVolumeSpecName "kube-api-access-wtqjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.147100 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtqjw\" (UniqueName: \"kubernetes.io/projected/b9e39eff-698f-4561-bfa8-4a28b17e4559-kube-api-access-wtqjw\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.147154 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e39eff-698f-4561-bfa8-4a28b17e4559-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.361358 4718 generic.go:334] "Generic (PLEG): container finished" podID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerID="ff94ae833b7ed06f3892f28025598035bf83d5fae32541285d25f4e5a42a084e" exitCode=143
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.361712 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerDied","Data":"ff94ae833b7ed06f3892f28025598035bf83d5fae32541285d25f4e5a42a084e"}
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.364359 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z8mq2" event={"ID":"b9e39eff-698f-4561-bfa8-4a28b17e4559","Type":"ContainerDied","Data":"4ba271f1b27c4a21747534055586d45a72c8d789b64bf30c182164de22fb9464"}
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.364406 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba271f1b27c4a21747534055586d45a72c8d789b64bf30c182164de22fb9464"
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.364927 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8567b78dd5-chd6w"
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.365695 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z8mq2"
Jan 23 16:38:24 crc kubenswrapper[4718]: I0123 16:38:24.607385 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-2kxbx" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout"
Jan 23 16:38:26 crc kubenswrapper[4718]: I0123 16:38:26.388198 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z8mq2"]
Jan 23 16:38:26 crc kubenswrapper[4718]: I0123 16:38:26.405523 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z8mq2"]
Jan 23 16:38:26 crc kubenswrapper[4718]: I0123 16:38:26.994911 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.035558 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036028 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036238 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036443 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036712 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036886 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmmgw\" (UniqueName: \"kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.037109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml\") pod \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\" (UID: \"d99c9a2d-5a68-454a-9516-24e28ef12bb5\") "
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.036798 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.037303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.038947 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.039184 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d99c9a2d-5a68-454a-9516-24e28ef12bb5-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.045236 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw" (OuterVolumeSpecName: "kube-api-access-cmmgw") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "kube-api-access-cmmgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.061939 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts" (OuterVolumeSpecName: "scripts") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.071379 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.124779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.141889 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.142023 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.142040 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmmgw\" (UniqueName: \"kubernetes.io/projected/d99c9a2d-5a68-454a-9516-24e28ef12bb5-kube-api-access-cmmgw\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.142067 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.147944 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data" (OuterVolumeSpecName: "config-data") pod "d99c9a2d-5a68-454a-9516-24e28ef12bb5" (UID: "d99c9a2d-5a68-454a-9516-24e28ef12bb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.158317 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e39eff-698f-4561-bfa8-4a28b17e4559" path="/var/lib/kubelet/pods/b9e39eff-698f-4561-bfa8-4a28b17e4559/volumes"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.245830 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99c9a2d-5a68-454a-9516-24e28ef12bb5-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.420330 4718 generic.go:334] "Generic (PLEG): container finished" podID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerID="1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6" exitCode=0
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.420387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerDied","Data":"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"}
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.420421 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.420449 4718 scope.go:117] "RemoveContainer" containerID="7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.420433 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d99c9a2d-5a68-454a-9516-24e28ef12bb5","Type":"ContainerDied","Data":"aae0a33dbf1850e5000edda2c105909deaa6a8689bb6ce667d6edcb01a7cbd94"}
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.471322 4718 scope.go:117] "RemoveContainer" containerID="6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.542235 4718 scope.go:117] "RemoveContainer" containerID="1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.546678 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.568604 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.572779 4718 scope.go:117] "RemoveContainer" containerID="7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.573363 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223\": container with ID starting with 7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223 not found: ID does not exist" containerID="7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.573423 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223"} err="failed to get container status \"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223\": rpc error: code = NotFound desc = could not find container \"7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223\": container with ID starting with 7cb63ef046d3b54e142c3383d97a0ae84bd88fea50a18fca37fd000a9b359223 not found: ID does not exist"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.573460 4718 scope.go:117] "RemoveContainer" containerID="6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.573784 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4\": container with ID starting with 6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4 not found: ID does not exist" containerID="6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.573814 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4"} err="failed to get container status \"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4\": rpc error: code = NotFound desc = could not find container \"6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4\": container with ID starting with 6c7e2dbcc86b9e4b0be482b28470028317b187c9a81e8793ad2ae05c9610b1b4 not found: ID does not exist"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.573833 4718 scope.go:117] "RemoveContainer" containerID="1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.574197 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6\": container with ID starting with 1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6 not found: ID does not exist" containerID="1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.580184 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6"} err="failed to get container status \"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6\": rpc error: code = NotFound desc = could not find container \"1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6\": container with ID starting with 1e18f3e22b1e55ac01afe5a1095f50f76452ed771766c2719f494564316c91b6 not found: ID does not exist"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.581362 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.581981 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker-log"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582003 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker-log"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582015 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="proxy-httpd"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582021 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="proxy-httpd"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582043 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e39eff-698f-4561-bfa8-4a28b17e4559" containerName="mariadb-account-create-update"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582050 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e39eff-698f-4561-bfa8-4a28b17e4559" containerName="mariadb-account-create-update"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582071 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-api"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582078 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-api"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582095 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582101 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582109 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener-log"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582116 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener-log"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582129 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api"
Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582134 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api"
Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582144 4718
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="sg-core" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582150 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="sg-core" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582165 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="dnsmasq-dns" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582174 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="dnsmasq-dns" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582192 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-httpd" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582198 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-httpd" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582206 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582214 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582230 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="init" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582236 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="init" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582248 4718 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="ceilometer-notification-agent" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582255 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="ceilometer-notification-agent" Jan 23 16:38:27 crc kubenswrapper[4718]: E0123 16:38:27.582268 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api-log" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582275 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api-log" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582523 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="proxy-httpd" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582543 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="ceilometer-notification-agent" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582553 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api-log" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582562 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="06131a45-931f-4a9f-bb3f-a250ab6f5aaa" containerName="barbican-api" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582574 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-api" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582585 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener-log" Jan 23 16:38:27 crc kubenswrapper[4718]: 
I0123 16:38:27.582593 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" containerName="sg-core" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582604 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker-log" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582614 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e342b5-4486-4b97-8e64-d6a189164e51" containerName="barbican-keystone-listener" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582651 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7faa3883-3a27-4ef9-9da6-476f43ad53e0" containerName="barbican-worker" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582661 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c0d3dd-fddc-4460-9bf6-89df19751954" containerName="neutron-httpd" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582670 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0a620d-3a8d-418e-85d1-f0be169a3d48" containerName="dnsmasq-dns" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.582680 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e39eff-698f-4561-bfa8-4a28b17e4559" containerName="mariadb-account-create-update" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.584879 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.587204 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.587272 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.591124 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.655661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52m7\" (UniqueName: \"kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.655736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.656122 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.656177 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.656371 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.658453 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.658575 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.710701 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mpk4z"] Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.712500 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.714875 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.727525 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mpk4z"] Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.762158 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.762700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.762885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q52m7\" (UniqueName: \"kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763229 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdskt\" (UniqueName: \"kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763305 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763384 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.763813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd\") pod 
\"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.764993 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.770669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.770799 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.771773 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.780868 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.781903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q52m7\" (UniqueName: 
\"kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7\") pod \"ceilometer-0\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " pod="openstack/ceilometer-0" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.882843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdskt\" (UniqueName: \"kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.883259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.884414 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.907362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdskt\" (UniqueName: \"kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt\") pod \"root-account-create-update-mpk4z\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") " pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:27 crc kubenswrapper[4718]: I0123 16:38:27.920369 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.032729 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mpk4z" Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.408534 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.443608 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dffc5fb8-5997w" Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.446031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerStarted","Data":"53420e58f1dc57f3eb13a65dab9fde4ea2e198c2daac47b807bd90065581f50e"} Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.617156 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.627713 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mpk4z"] Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.777137 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"] Jan 23 16:38:28 crc kubenswrapper[4718]: I0123 16:38:28.777453 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="dnsmasq-dns" containerID="cri-o://63ae5cd414289f120bd643906abd81fcc995c7ab6486b15baf278693935e168e" gracePeriod=10 Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.012732 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.066673 4718 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.187100 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99c9a2d-5a68-454a-9516-24e28ef12bb5" path="/var/lib/kubelet/pods/d99c9a2d-5a68-454a-9516-24e28ef12bb5/volumes" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.469616 4718 generic.go:334] "Generic (PLEG): container finished" podID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerID="63ae5cd414289f120bd643906abd81fcc995c7ab6486b15baf278693935e168e" exitCode=0 Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.469779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" event={"ID":"50e2741c-c631-42d1-bc2a-71292bbcfe61","Type":"ContainerDied","Data":"63ae5cd414289f120bd643906abd81fcc995c7ab6486b15baf278693935e168e"} Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.470218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" event={"ID":"50e2741c-c631-42d1-bc2a-71292bbcfe61","Type":"ContainerDied","Data":"888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5"} Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.470244 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888c744adf64ccc84a42f00cfc1f29f63c2b47b792834e03ba149db03071afc5" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.474120 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpk4z" event={"ID":"eef0d436-0c13-4aaf-abdd-cac57a46ee2a","Type":"ContainerStarted","Data":"eb34100650b2eb4b495ca7df5a60b353fdd6bc115d1d32973aa36891ea35b349"} Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.474236 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpk4z" 
event={"ID":"eef0d436-0c13-4aaf-abdd-cac57a46ee2a","Type":"ContainerStarted","Data":"896e1af7bde123054712191570b699206768ac1270407f9e05d1fe9c29e6fcff"} Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.474349 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="cinder-scheduler" containerID="cri-o://6bc00aeef6027ccb83d10d3a69bb83ba9595dda4ec9a3c92a55987fffec045c2" gracePeriod=30 Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.474448 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="probe" containerID="cri-o://007cd94357f3d9f181dde407729511b7c9dea9929675039e104fe8845fccad25" gracePeriod=30 Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.504433 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mpk4z" podStartSLOduration=2.5043973470000003 podStartE2EDuration="2.504397347s" podCreationTimestamp="2026-01-23 16:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:29.496137753 +0000 UTC m=+1310.643379744" watchObservedRunningTime="2026-01-23 16:38:29.504397347 +0000 UTC m=+1310.651639338" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.668236 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.752277 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.752537 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.752602 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.752685 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.752918 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5pvl\" (UniqueName: \"kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.753010 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0\") pod \"50e2741c-c631-42d1-bc2a-71292bbcfe61\" (UID: \"50e2741c-c631-42d1-bc2a-71292bbcfe61\") " Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.771773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl" (OuterVolumeSpecName: "kube-api-access-l5pvl") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "kube-api-access-l5pvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.828508 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.859785 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.859820 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5pvl\" (UniqueName: \"kubernetes.io/projected/50e2741c-c631-42d1-bc2a-71292bbcfe61-kube-api-access-l5pvl\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.874806 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.874836 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.891316 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config" (OuterVolumeSpecName: "config") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.917413 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50e2741c-c631-42d1-bc2a-71292bbcfe61" (UID: "50e2741c-c631-42d1-bc2a-71292bbcfe61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.962741 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.962775 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.962788 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:29 crc kubenswrapper[4718]: I0123 16:38:29.962799 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50e2741c-c631-42d1-bc2a-71292bbcfe61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.489571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerStarted","Data":"752f4a41ec586686185358e6698c9dea6986463f8b5bdae70f873e0e9a8d1f9b"}
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.491209 4718 generic.go:334] "Generic (PLEG): container finished" podID="eef0d436-0c13-4aaf-abdd-cac57a46ee2a" containerID="eb34100650b2eb4b495ca7df5a60b353fdd6bc115d1d32973aa36891ea35b349" exitCode=0
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.491310 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpk4z" event={"ID":"eef0d436-0c13-4aaf-abdd-cac57a46ee2a","Type":"ContainerDied","Data":"eb34100650b2eb4b495ca7df5a60b353fdd6bc115d1d32973aa36891ea35b349"}
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.493697 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerID="007cd94357f3d9f181dde407729511b7c9dea9929675039e104fe8845fccad25" exitCode=0
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.493804 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-ld95m"
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.493800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerDied","Data":"007cd94357f3d9f181dde407729511b7c9dea9929675039e104fe8845fccad25"}
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.589826 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"]
Jan 23 16:38:30 crc kubenswrapper[4718]: I0123 16:38:30.601297 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-ld95m"]
Jan 23 16:38:31 crc kubenswrapper[4718]: I0123 16:38:31.156129 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" path="/var/lib/kubelet/pods/50e2741c-c631-42d1-bc2a-71292bbcfe61/volumes"
Jan 23 16:38:31 crc kubenswrapper[4718]: I0123 16:38:31.474484 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 23 16:38:31 crc kubenswrapper[4718]: I0123 16:38:31.514308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerStarted","Data":"b16239793bd24103e3b05e1f2a4de5bca0050368af76f5f4740bbb7571f4fc9a"}
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.007652 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mpk4z"
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.137898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts\") pod \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") "
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.138188 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdskt\" (UniqueName: \"kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt\") pod \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\" (UID: \"eef0d436-0c13-4aaf-abdd-cac57a46ee2a\") "
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.138564 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eef0d436-0c13-4aaf-abdd-cac57a46ee2a" (UID: "eef0d436-0c13-4aaf-abdd-cac57a46ee2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.140558 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.148890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt" (OuterVolumeSpecName: "kube-api-access-cdskt") pod "eef0d436-0c13-4aaf-abdd-cac57a46ee2a" (UID: "eef0d436-0c13-4aaf-abdd-cac57a46ee2a"). InnerVolumeSpecName "kube-api-access-cdskt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.242791 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdskt\" (UniqueName: \"kubernetes.io/projected/eef0d436-0c13-4aaf-abdd-cac57a46ee2a-kube-api-access-cdskt\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.551943 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerStarted","Data":"7b3d3bba1c12fc4b4a83848d3a67b253d7d522ff10304993d1eeab0521eaa910"}
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.557209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mpk4z" event={"ID":"eef0d436-0c13-4aaf-abdd-cac57a46ee2a","Type":"ContainerDied","Data":"896e1af7bde123054712191570b699206768ac1270407f9e05d1fe9c29e6fcff"}
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.557615 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896e1af7bde123054712191570b699206768ac1270407f9e05d1fe9c29e6fcff"
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.557722 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mpk4z"
Jan 23 16:38:32 crc kubenswrapper[4718]: I0123 16:38:32.649785 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f6b6f5fd7-qpbqz"
Jan 23 16:38:33 crc kubenswrapper[4718]: I0123 16:38:33.572706 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerStarted","Data":"3923574407ad2675f8d4a2a15bc52847c53df28e40ba6d73c53b48b2fb157c1a"}
Jan 23 16:38:33 crc kubenswrapper[4718]: I0123 16:38:33.573330 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 16:38:33 crc kubenswrapper[4718]: I0123 16:38:33.597669 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.880097975 podStartE2EDuration="6.597621856s" podCreationTimestamp="2026-01-23 16:38:27 +0000 UTC" firstStartedPulling="2026-01-23 16:38:28.428769847 +0000 UTC m=+1309.576011838" lastFinishedPulling="2026-01-23 16:38:33.146293728 +0000 UTC m=+1314.293535719" observedRunningTime="2026-01-23 16:38:33.595797147 +0000 UTC m=+1314.743039128" watchObservedRunningTime="2026-01-23 16:38:33.597621856 +0000 UTC m=+1314.744863857"
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.600917 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerID="6bc00aeef6027ccb83d10d3a69bb83ba9595dda4ec9a3c92a55987fffec045c2" exitCode=0
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.601119 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerDied","Data":"6bc00aeef6027ccb83d10d3a69bb83ba9595dda4ec9a3c92a55987fffec045c2"}
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.820225 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.912538 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-292d4\" (UniqueName: \"kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913130 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913310 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913375 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913291 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913493 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data\") pod \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\" (UID: \"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535\") "
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.913998 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.920697 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4" (OuterVolumeSpecName: "kube-api-access-292d4") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "kube-api-access-292d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.922313 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts" (OuterVolumeSpecName: "scripts") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.943804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:34 crc kubenswrapper[4718]: I0123 16:38:34.985773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.018247 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-292d4\" (UniqueName: \"kubernetes.io/projected/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-kube-api-access-292d4\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.018505 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.018565 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.018618 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.057944 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data" (OuterVolumeSpecName: "config-data") pod "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" (UID: "8e8f68ad-bb83-4f4f-a545-c52cc0ca4535"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.121093 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.236837 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 23 16:38:35 crc kubenswrapper[4718]: E0123 16:38:35.237361 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="cinder-scheduler"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237382 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="cinder-scheduler"
Jan 23 16:38:35 crc kubenswrapper[4718]: E0123 16:38:35.237408 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="init"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237416 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="init"
Jan 23 16:38:35 crc kubenswrapper[4718]: E0123 16:38:35.237434 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef0d436-0c13-4aaf-abdd-cac57a46ee2a" containerName="mariadb-account-create-update"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237441 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef0d436-0c13-4aaf-abdd-cac57a46ee2a" containerName="mariadb-account-create-update"
Jan 23 16:38:35 crc kubenswrapper[4718]: E0123 16:38:35.237451 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="dnsmasq-dns"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237456 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="dnsmasq-dns"
Jan 23 16:38:35 crc kubenswrapper[4718]: E0123 16:38:35.237494 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="probe"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237502 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="probe"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237728 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e2741c-c631-42d1-bc2a-71292bbcfe61" containerName="dnsmasq-dns"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237752 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef0d436-0c13-4aaf-abdd-cac57a46ee2a" containerName="mariadb-account-create-update"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237766 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="cinder-scheduler"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.237786 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" containerName="probe"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.238752 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.251111 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.251138 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.251295 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8zr45"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.256724 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.327079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr27q\" (UniqueName: \"kubernetes.io/projected/ed1d44ad-8796-452b-a194-17b351fc8c01-kube-api-access-rr27q\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.327253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.327288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config-secret\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.327319 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.430074 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr27q\" (UniqueName: \"kubernetes.io/projected/ed1d44ad-8796-452b-a194-17b351fc8c01-kube-api-access-rr27q\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.430483 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.430598 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config-secret\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.430718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.432926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.445366 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-openstack-config-secret\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.445725 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1d44ad-8796-452b-a194-17b351fc8c01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.451246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr27q\" (UniqueName: \"kubernetes.io/projected/ed1d44ad-8796-452b-a194-17b351fc8c01-kube-api-access-rr27q\") pod \"openstackclient\" (UID: \"ed1d44ad-8796-452b-a194-17b351fc8c01\") " pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.563799 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.620313 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f68ad-bb83-4f4f-a545-c52cc0ca4535","Type":"ContainerDied","Data":"9859d5edfecb927be1e584a992643ef6f92d7a4edcdf41c328ab7c17a6dbc96d"}
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.620391 4718 scope.go:117] "RemoveContainer" containerID="007cd94357f3d9f181dde407729511b7c9dea9929675039e104fe8845fccad25"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.620601 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.750794 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.764989 4718 scope.go:117] "RemoveContainer" containerID="6bc00aeef6027ccb83d10d3a69bb83ba9595dda4ec9a3c92a55987fffec045c2"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.777919 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.802096 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.804599 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.807803 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.829290 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942139 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzlt9\" (UniqueName: \"kubernetes.io/projected/f1c0f246-5016-4f2f-94a8-5805981faffc-kube-api-access-lzlt9\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942648 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942688 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942740 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942763 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1c0f246-5016-4f2f-94a8-5805981faffc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:35 crc kubenswrapper[4718]: I0123 16:38:35.942780 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.044885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1c0f246-5016-4f2f-94a8-5805981faffc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.044934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.045042 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1c0f246-5016-4f2f-94a8-5805981faffc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.045080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzlt9\" (UniqueName: \"kubernetes.io/projected/f1c0f246-5016-4f2f-94a8-5805981faffc-kube-api-access-lzlt9\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.045204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.045280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.045439 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.052680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.054529 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.054910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.055720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1c0f246-5016-4f2f-94a8-5805981faffc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.070174 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzlt9\" (UniqueName: \"kubernetes.io/projected/f1c0f246-5016-4f2f-94a8-5805981faffc-kube-api-access-lzlt9\") pod \"cinder-scheduler-0\" (UID: \"f1c0f246-5016-4f2f-94a8-5805981faffc\") " pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.128698 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.153877 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.404792 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mpk4z"]
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.411836 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mpk4z"]
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.634167 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ed1d44ad-8796-452b-a194-17b351fc8c01","Type":"ContainerStarted","Data":"7b0fcc00c29e53b5909046e4d7b56279411acd3ac46c77538659b8d928abcfef"}
Jan 23 16:38:36 crc kubenswrapper[4718]: I0123 16:38:36.641022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 16:38:37 crc kubenswrapper[4718]: I0123 16:38:37.219734 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8f68ad-bb83-4f4f-a545-c52cc0ca4535" path="/var/lib/kubelet/pods/8e8f68ad-bb83-4f4f-a545-c52cc0ca4535/volumes"
Jan 23 16:38:37 crc kubenswrapper[4718]: I0123 16:38:37.223778 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef0d436-0c13-4aaf-abdd-cac57a46ee2a" path="/var/lib/kubelet/pods/eef0d436-0c13-4aaf-abdd-cac57a46ee2a/volumes"
Jan 23 16:38:37 crc kubenswrapper[4718]: I0123 16:38:37.676620 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1c0f246-5016-4f2f-94a8-5805981faffc","Type":"ContainerStarted","Data":"d699a9f0f9183e672844e1cc7c5ee6d48e81aae73124cdb68741b5b8b1711e46"}
Jan 23 16:38:37 crc kubenswrapper[4718]: I0123 16:38:37.677173 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1c0f246-5016-4f2f-94a8-5805981faffc","Type":"ContainerStarted","Data":"3da89f7774f684064df46c73fc6899f0e962dd7418ecd2f72fca33899f3d1ef1"}
Jan 23 16:38:38 crc kubenswrapper[4718]: I0123 16:38:38.689339 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1c0f246-5016-4f2f-94a8-5805981faffc","Type":"ContainerStarted","Data":"06dca8a9d6046fd352a680d288c544329fa3d3c87c7cb5b64bf04fbbaf050a0c"}
Jan 23 16:38:38 crc kubenswrapper[4718]: I0123 16:38:38.709165 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.709142759 podStartE2EDuration="3.709142759s" podCreationTimestamp="2026-01-23 16:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:38.705589583 +0000 UTC m=+1319.852831584" watchObservedRunningTime="2026-01-23 16:38:38.709142759 +0000 UTC m=+1319.856384750"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.442009 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b6b78dc95-9ft97"]
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.445265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b6b78dc95-9ft97"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.447741 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.455578 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.474209 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.490843 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b6b78dc95-9ft97"]
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-log-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-public-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97"
Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594392 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-combined-ca-bundle\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97"
Jan 23 16:38:40 crc kubenswrapper[4718]:
I0123 16:38:40.594480 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-config-data\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594727 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-run-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594782 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-etc-swift\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594825 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-internal-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.594911 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cjvp\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-kube-api-access-9cjvp\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 
16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.700030 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-log-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.700090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-public-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.700115 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-combined-ca-bundle\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.700990 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-log-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.717898 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-public-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.719274 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-combined-ca-bundle\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.720705 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-config-data\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.721228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-run-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.721317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-etc-swift\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.721441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-internal-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.721547 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9cjvp\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-kube-api-access-9cjvp\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.729812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-run-httpd\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.732969 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-config-data\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.740868 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-etc-swift\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.743298 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cjvp\" (UniqueName: \"kubernetes.io/projected/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-kube-api-access-9cjvp\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.765331 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1-internal-tls-certs\") pod \"swift-proxy-5b6b78dc95-9ft97\" (UID: \"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1\") " pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:40 crc kubenswrapper[4718]: I0123 16:38:40.781709 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.131774 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.440546 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s285q"] Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.442601 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.448513 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.452351 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s285q"] Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.548678 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcf8\" (UniqueName: \"kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8\") pod \"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.548765 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts\") pod 
\"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.635818 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b6b78dc95-9ft97"] Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.653200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rcf8\" (UniqueName: \"kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8\") pod \"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.653315 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts\") pod \"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.654500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts\") pod \"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.672989 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rcf8\" (UniqueName: \"kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8\") pod \"root-account-create-update-s285q\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.765394 4718 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s285q" Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.772056 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b6b78dc95-9ft97" event={"ID":"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1","Type":"ContainerStarted","Data":"62fc2b420c98736801a3e5c63c8873a623cb5a3e8a515a746b56e16e1e9a0da5"} Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.833830 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.834547 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-central-agent" containerID="cri-o://752f4a41ec586686185358e6698c9dea6986463f8b5bdae70f873e0e9a8d1f9b" gracePeriod=30 Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.835088 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="proxy-httpd" containerID="cri-o://3923574407ad2675f8d4a2a15bc52847c53df28e40ba6d73c53b48b2fb157c1a" gracePeriod=30 Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.835148 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="sg-core" containerID="cri-o://7b3d3bba1c12fc4b4a83848d3a67b253d7d522ff10304993d1eeab0521eaa910" gracePeriod=30 Jan 23 16:38:41 crc kubenswrapper[4718]: I0123 16:38:41.835184 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-notification-agent" containerID="cri-o://b16239793bd24103e3b05e1f2a4de5bca0050368af76f5f4740bbb7571f4fc9a" gracePeriod=30 Jan 23 16:38:42 crc kubenswrapper[4718]: 
I0123 16:38:42.368973 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s285q"] Jan 23 16:38:42 crc kubenswrapper[4718]: W0123 16:38:42.383060 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf77eff7_55fd_45ad_8e28_7c437167cc0b.slice/crio-7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8 WatchSource:0}: Error finding container 7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8: Status 404 returned error can't find the container with id 7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8 Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.823038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s285q" event={"ID":"cf77eff7-55fd-45ad-8e28-7c437167cc0b","Type":"ContainerStarted","Data":"7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860277 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d386334-9880-4350-85e7-677ae774bbfe" containerID="3923574407ad2675f8d4a2a15bc52847c53df28e40ba6d73c53b48b2fb157c1a" exitCode=0 Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860318 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d386334-9880-4350-85e7-677ae774bbfe" containerID="7b3d3bba1c12fc4b4a83848d3a67b253d7d522ff10304993d1eeab0521eaa910" exitCode=2 Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860329 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d386334-9880-4350-85e7-677ae774bbfe" containerID="b16239793bd24103e3b05e1f2a4de5bca0050368af76f5f4740bbb7571f4fc9a" exitCode=0 Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860336 4718 generic.go:334] "Generic (PLEG): container finished" podID="2d386334-9880-4350-85e7-677ae774bbfe" containerID="752f4a41ec586686185358e6698c9dea6986463f8b5bdae70f873e0e9a8d1f9b" 
exitCode=0 Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860409 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerDied","Data":"3923574407ad2675f8d4a2a15bc52847c53df28e40ba6d73c53b48b2fb157c1a"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860440 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerDied","Data":"7b3d3bba1c12fc4b4a83848d3a67b253d7d522ff10304993d1eeab0521eaa910"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860451 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerDied","Data":"b16239793bd24103e3b05e1f2a4de5bca0050368af76f5f4740bbb7571f4fc9a"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.860459 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerDied","Data":"752f4a41ec586686185358e6698c9dea6986463f8b5bdae70f873e0e9a8d1f9b"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.874259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b6b78dc95-9ft97" event={"ID":"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1","Type":"ContainerStarted","Data":"c75f438b99e05e9f3cebc44e312acd98cf19b8d59ada389b38112155ac32daae"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.874320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b6b78dc95-9ft97" event={"ID":"6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1","Type":"ContainerStarted","Data":"5d11af79c7f4b777765adcaef5cd9eeefc8a42d5cf0d0fd757afc9ee164705f7"} Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.874684 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.874712 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:42 crc kubenswrapper[4718]: I0123 16:38:42.926518 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podStartSLOduration=2.926493237 podStartE2EDuration="2.926493237s" podCreationTimestamp="2026-01-23 16:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:42.905940419 +0000 UTC m=+1324.053182400" watchObservedRunningTime="2026-01-23 16:38:42.926493237 +0000 UTC m=+1324.073735228" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.070848 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202450 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202578 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q52m7\" (UniqueName: \"kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202712 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: 
\"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202742 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202860 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.202895 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.203010 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data\") pod \"2d386334-9880-4350-85e7-677ae774bbfe\" (UID: \"2d386334-9880-4350-85e7-677ae774bbfe\") " Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.206944 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.207384 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.214864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7" (OuterVolumeSpecName: "kube-api-access-q52m7") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "kube-api-access-q52m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.226793 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts" (OuterVolumeSpecName: "scripts") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.273653 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.310557 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.310604 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q52m7\" (UniqueName: \"kubernetes.io/projected/2d386334-9880-4350-85e7-677ae774bbfe-kube-api-access-q52m7\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.310623 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.310649 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.310658 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d386334-9880-4350-85e7-677ae774bbfe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.328387 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.359473 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data" (OuterVolumeSpecName: "config-data") pod "2d386334-9880-4350-85e7-677ae774bbfe" (UID: "2d386334-9880-4350-85e7-677ae774bbfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.412424 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.412456 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d386334-9880-4350-85e7-677ae774bbfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.890752 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d386334-9880-4350-85e7-677ae774bbfe","Type":"ContainerDied","Data":"53420e58f1dc57f3eb13a65dab9fde4ea2e198c2daac47b807bd90065581f50e"} Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.892146 4718 scope.go:117] "RemoveContainer" containerID="3923574407ad2675f8d4a2a15bc52847c53df28e40ba6d73c53b48b2fb157c1a" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.891902 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.911837 4718 generic.go:334] "Generic (PLEG): container finished" podID="cf77eff7-55fd-45ad-8e28-7c437167cc0b" containerID="3dc19c218f202ae57b0754158be96e3643674a265d0cd8429c4401ce2134dc5d" exitCode=0 Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.911922 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s285q" event={"ID":"cf77eff7-55fd-45ad-8e28-7c437167cc0b","Type":"ContainerDied","Data":"3dc19c218f202ae57b0754158be96e3643674a265d0cd8429c4401ce2134dc5d"} Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.961513 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.976928 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.989649 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:43 crc kubenswrapper[4718]: E0123 16:38:43.990132 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="proxy-httpd" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.990151 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="proxy-httpd" Jan 23 16:38:43 crc kubenswrapper[4718]: E0123 16:38:43.990171 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="sg-core" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.990177 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="sg-core" Jan 23 16:38:43 crc kubenswrapper[4718]: E0123 16:38:43.990218 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-central-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.990225 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-central-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: E0123 16:38:43.990243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-notification-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.990250 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-notification-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.992157 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-central-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.992182 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="proxy-httpd" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.992202 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="ceilometer-notification-agent" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.992214 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d386334-9880-4350-85e7-677ae774bbfe" containerName="sg-core" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.995155 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.998712 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:38:43 crc kubenswrapper[4718]: I0123 16:38:43.998723 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.010112 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162160 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9jc\" (UniqueName: \"kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162330 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd\") pod \"ceilometer-0\" (UID: 
\"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162362 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.162471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264741 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264779 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264801 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9jc\" (UniqueName: \"kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264892 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.264929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.266257 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " 
pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.267681 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.273857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.275455 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.275883 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.280554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.286556 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9jc\" (UniqueName: 
\"kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc\") pod \"ceilometer-0\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " pod="openstack/ceilometer-0" Jan 23 16:38:44 crc kubenswrapper[4718]: I0123 16:38:44.369322 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:38:45 crc kubenswrapper[4718]: I0123 16:38:45.156432 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d386334-9880-4350-85e7-677ae774bbfe" path="/var/lib/kubelet/pods/2d386334-9880-4350-85e7-677ae774bbfe/volumes" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.340256 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.342191 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.347104 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.348814 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-sjrc6" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.351853 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.384877 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.465277 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.467067 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.480454 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.533420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.533493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.533551 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.533593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwhp\" (UniqueName: \"kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.582183 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.586227 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.623168 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680258 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680365 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680433 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 
23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680489 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680513 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680560 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.680600 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5t4\" (UniqueName: \"kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.743465 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 
crc kubenswrapper[4718]: I0123 16:38:46.753350 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txwhp\" (UniqueName: \"kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.753501 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.753560 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.753656 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwjs\" (UniqueName: \"kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.753785 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: 
\"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.753371 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.755754 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.769915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.815470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.820094 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.843818 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.846563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwhp\" (UniqueName: \"kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp\") pod \"heat-engine-7cd7745668-858qv\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.847912 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj5t4\" (UniqueName: \"kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899755 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899772 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899799 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwjs\" (UniqueName: \"kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899867 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899937 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.899965 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.901573 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.902101 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.902596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.907973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0\") pod 
\"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.909930 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.925579 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.928195 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.953152 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.954092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.958913 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj5t4\" (UniqueName: 
\"kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4\") pod \"dnsmasq-dns-688b9f5b49-7tbwv\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.960595 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwjs\" (UniqueName: \"kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs\") pod \"heat-cfnapi-567d9cb85d-xx4vp\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:46 crc kubenswrapper[4718]: I0123 16:38:46.995214 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.003372 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.003548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.003582 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " 
pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.003673 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z822r\" (UniqueName: \"kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.106424 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.106472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.106528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z822r\" (UniqueName: \"kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.106647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " 
pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.112523 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.113234 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.114284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.120940 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.129277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z822r\" (UniqueName: \"kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r\") pod \"heat-api-7dc8889c5c-4lxb7\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.164533 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.256715 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:38:47 crc kubenswrapper[4718]: I0123 16:38:47.325466 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:38:48 crc kubenswrapper[4718]: I0123 16:38:48.097144 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8567b78dd5-chd6w" Jan 23 16:38:48 crc kubenswrapper[4718]: I0123 16:38:48.187034 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79d7cd7f96-8w9s4"] Jan 23 16:38:48 crc kubenswrapper[4718]: I0123 16:38:48.190353 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79d7cd7f96-8w9s4" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-api" containerID="cri-o://208e15d0e6f071083c884987d678627e340c6690645f9772f8c5aef7aa6fc9d3" gracePeriod=30 Jan 23 16:38:48 crc kubenswrapper[4718]: I0123 16:38:48.191244 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79d7cd7f96-8w9s4" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-httpd" containerID="cri-o://861b8bc546231af488bdfaa720f2977ae98838f84ff3b3e72bc1366bc74f1e5a" gracePeriod=30 Jan 23 16:38:49 crc kubenswrapper[4718]: I0123 16:38:49.012771 4718 generic.go:334] "Generic (PLEG): container finished" podID="5ce5910e-d662-4c66-a349-684b2d98509c" containerID="861b8bc546231af488bdfaa720f2977ae98838f84ff3b3e72bc1366bc74f1e5a" exitCode=0 Jan 23 16:38:49 crc kubenswrapper[4718]: I0123 16:38:49.012820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" 
event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerDied","Data":"861b8bc546231af488bdfaa720f2977ae98838f84ff3b3e72bc1366bc74f1e5a"} Jan 23 16:38:50 crc kubenswrapper[4718]: I0123 16:38:50.793489 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:50 crc kubenswrapper[4718]: I0123 16:38:50.801865 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b6b78dc95-9ft97" Jan 23 16:38:51 crc kubenswrapper[4718]: I0123 16:38:51.203558 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.052494 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.054817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.092994 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.199719 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.201409 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.217710 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.219784 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.221112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.221236 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.221297 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.221458 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g4f\" (UniqueName: \"kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.234048 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.266750 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.323265 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.323351 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.323397 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.323703 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnj6\" (UniqueName: \"kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.323785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: 
\"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.324105 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.324172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.324284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.324347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.324840 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrcw\" (UniqueName: \"kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: 
\"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.325025 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2g4f\" (UniqueName: \"kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.325197 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.332019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.337463 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.342550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: 
\"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.359716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2g4f\" (UniqueName: \"kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f\") pod \"heat-engine-7b6b6b4c97-tcz2k\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") " pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.398480 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427546 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427645 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427733 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrcw\" (UniqueName: \"kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427844 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427925 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.427965 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.428019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jnj6\" (UniqueName: \"kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.434700 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.434989 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.437455 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.437728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.440864 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.446285 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom\") pod 
\"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.446467 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jnj6\" (UniqueName: \"kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6\") pod \"heat-api-844bdc55c5-4kc9t\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.456092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrcw\" (UniqueName: \"kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw\") pod \"heat-cfnapi-6cb6c6d8c4-hwj8v\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.531640 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.545347 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:38:53 crc kubenswrapper[4718]: I0123 16:38:53.737782 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.216:8776/healthcheck\": dial tcp 10.217.0.216:8776: connect: connection refused" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.149072 4718 generic.go:334] "Generic (PLEG): container finished" podID="5ce5910e-d662-4c66-a349-684b2d98509c" containerID="208e15d0e6f071083c884987d678627e340c6690645f9772f8c5aef7aa6fc9d3" exitCode=0 Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.149158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerDied","Data":"208e15d0e6f071083c884987d678627e340c6690645f9772f8c5aef7aa6fc9d3"} Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.152376 4718 generic.go:334] "Generic (PLEG): container finished" podID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerID="6f0f4f12cfcefeb95c63a439038340192c6d6ae42a05837cbd0bfcaee252585f" exitCode=137 Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.152411 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerDied","Data":"6f0f4f12cfcefeb95c63a439038340192c6d6ae42a05837cbd0bfcaee252585f"} Jan 23 16:38:54 crc kubenswrapper[4718]: E0123 16:38:54.268788 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 23 16:38:54 crc kubenswrapper[4718]: E0123 16:38:54.268984 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6ch5cbh55ch59fh5c6hfdh687h58h546h55ch655h5c8h68dhf4h59hddh688h699h556hf7h574hbh589h5fbh7fh57h64fh58fh5fh5d8h9fh646q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr27q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(ed1d44ad-8796-452b-a194-17b351fc8c01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:38:54 crc kubenswrapper[4718]: E0123 16:38:54.270900 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="ed1d44ad-8796-452b-a194-17b351fc8c01" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.339935 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s285q" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.506178 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts\") pod \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.506723 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rcf8\" (UniqueName: \"kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8\") pod \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\" (UID: \"cf77eff7-55fd-45ad-8e28-7c437167cc0b\") " Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.510476 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf77eff7-55fd-45ad-8e28-7c437167cc0b" (UID: "cf77eff7-55fd-45ad-8e28-7c437167cc0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.518059 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8" (OuterVolumeSpecName: "kube-api-access-8rcf8") pod "cf77eff7-55fd-45ad-8e28-7c437167cc0b" (UID: "cf77eff7-55fd-45ad-8e28-7c437167cc0b"). InnerVolumeSpecName "kube-api-access-8rcf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.550173 4718 scope.go:117] "RemoveContainer" containerID="7b3d3bba1c12fc4b4a83848d3a67b253d7d522ff10304993d1eeab0521eaa910" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.624994 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf77eff7-55fd-45ad-8e28-7c437167cc0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.625042 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rcf8\" (UniqueName: \"kubernetes.io/projected/cf77eff7-55fd-45ad-8e28-7c437167cc0b-kube-api-access-8rcf8\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:54 crc kubenswrapper[4718]: I0123 16:38:54.993363 4718 scope.go:117] "RemoveContainer" containerID="b16239793bd24103e3b05e1f2a4de5bca0050368af76f5f4740bbb7571f4fc9a" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.157295 4718 scope.go:117] "RemoveContainer" containerID="752f4a41ec586686185358e6698c9dea6986463f8b5bdae70f873e0e9a8d1f9b" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.219356 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.223289 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29e5d971-73b4-4847-a650-4de4832ffdd6","Type":"ContainerDied","Data":"942c7fd6ce679a7e62ec996a662aa38d6be37e10151cab9fdd03ea4040ce1148"} Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.239263 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s285q" event={"ID":"cf77eff7-55fd-45ad-8e28-7c437167cc0b","Type":"ContainerDied","Data":"7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8"} Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.239544 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aae85f811e180a4197e5437a48f698b0ad4f24233790c8a4bf8f44c742a24a8" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.239423 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s285q" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.268326 4718 scope.go:117] "RemoveContainer" containerID="6f0f4f12cfcefeb95c63a439038340192c6d6ae42a05837cbd0bfcaee252585f" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369122 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln89p\" (UniqueName: \"kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369676 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369837 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 
16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369874 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.369997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom\") pod \"29e5d971-73b4-4847-a650-4de4832ffdd6\" (UID: \"29e5d971-73b4-4847-a650-4de4832ffdd6\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.372212 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.373773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs" (OuterVolumeSpecName: "logs") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: E0123 16:38:55.380781 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="ed1d44ad-8796-452b-a194-17b351fc8c01" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.442526 4718 scope.go:117] "RemoveContainer" containerID="ff94ae833b7ed06f3892f28025598035bf83d5fae32541285d25f4e5a42a084e" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.452579 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.456675 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p" (OuterVolumeSpecName: "kube-api-access-ln89p") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "kube-api-access-ln89p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.458405 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts" (OuterVolumeSpecName: "scripts") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.472138 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln89p\" (UniqueName: \"kubernetes.io/projected/29e5d971-73b4-4847-a650-4de4832ffdd6-kube-api-access-ln89p\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.472175 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e5d971-73b4-4847-a650-4de4832ffdd6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.472185 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e5d971-73b4-4847-a650-4de4832ffdd6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.472194 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.472203 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.525032 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.562963 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data" (OuterVolumeSpecName: "config-data") pod "29e5d971-73b4-4847-a650-4de4832ffdd6" (UID: "29e5d971-73b4-4847-a650-4de4832ffdd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.573975 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.574005 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e5d971-73b4-4847-a650-4de4832ffdd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.701710 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.778305 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nfkt\" (UniqueName: \"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt\") pod \"5ce5910e-d662-4c66-a349-684b2d98509c\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.778531 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config\") pod \"5ce5910e-d662-4c66-a349-684b2d98509c\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.778598 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config\") pod \"5ce5910e-d662-4c66-a349-684b2d98509c\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.778669 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle\") pod \"5ce5910e-d662-4c66-a349-684b2d98509c\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.778960 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs\") pod \"5ce5910e-d662-4c66-a349-684b2d98509c\" (UID: \"5ce5910e-d662-4c66-a349-684b2d98509c\") " Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.793500 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt" (OuterVolumeSpecName: "kube-api-access-6nfkt") pod "5ce5910e-d662-4c66-a349-684b2d98509c" (UID: "5ce5910e-d662-4c66-a349-684b2d98509c"). InnerVolumeSpecName "kube-api-access-6nfkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.796397 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5ce5910e-d662-4c66-a349-684b2d98509c" (UID: "5ce5910e-d662-4c66-a349-684b2d98509c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.883748 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nfkt\" (UniqueName: \"kubernetes.io/projected/5ce5910e-d662-4c66-a349-684b2d98509c-kube-api-access-6nfkt\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.883786 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.906616 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ce5910e-d662-4c66-a349-684b2d98509c" (UID: "5ce5910e-d662-4c66-a349-684b2d98509c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.913847 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config" (OuterVolumeSpecName: "config") pod "5ce5910e-d662-4c66-a349-684b2d98509c" (UID: "5ce5910e-d662-4c66-a349-684b2d98509c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.966624 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.992300 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:55 crc kubenswrapper[4718]: I0123 16:38:55.992335 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.017885 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.024916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5ce5910e-d662-4c66-a349-684b2d98509c" (UID: "5ce5910e-d662-4c66-a349-684b2d98509c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.030413 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.042679 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.061789 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"] Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.062548 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.062639 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api" Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.062750 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-httpd" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.062816 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-httpd" Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.062877 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-api" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.062944 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-api" Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.063015 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf77eff7-55fd-45ad-8e28-7c437167cc0b" containerName="mariadb-account-create-update" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063070 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cf77eff7-55fd-45ad-8e28-7c437167cc0b" containerName="mariadb-account-create-update" Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.063138 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api-log" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063197 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api-log" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063476 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-api" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063550 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api-log" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063646 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" containerName="neutron-httpd" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063721 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" containerName="cinder-api" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.063775 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf77eff7-55fd-45ad-8e28-7c437167cc0b" containerName="mariadb-account-create-update" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.064758 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.073492 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.073785 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.094330 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105272 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105409 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105550 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brb42\" (UniqueName: 
\"kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105602 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105669 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105774 4718 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce5910e-d662-4c66-a349-684b2d98509c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.105835 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.107494 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.110712 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.110894 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.126562 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"] Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209358 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkfd\" (UniqueName: \"kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brb42\" (UniqueName: \"kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.209766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.210529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.211331 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.211376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.211429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.211505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.211616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.215383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.215515 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.215696 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.216766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.218212 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.226562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brb42\" (UniqueName: \"kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42\") pod \"heat-api-7995cbf6b7-gc2m4\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.265036 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.269345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dc8889c5c-4lxb7" event={"ID":"0afc8879-60d8-4d63-9784-e408e7e46ec8","Type":"ContainerStarted","Data":"f9b0f7a917bdbcdf8b4b3a8419b082faeb7500ad76d7582caf3e28dd3744e5a1"} Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.281452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79d7cd7f96-8w9s4" event={"ID":"5ce5910e-d662-4c66-a349-684b2d98509c","Type":"ContainerDied","Data":"5d8c0dbc7aabe3df91ed1665d0c3f52215dad4fbafb47f7da9dbdcbb55252684"} Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.281529 4718 scope.go:117] "RemoveContainer" containerID="861b8bc546231af488bdfaa720f2977ae98838f84ff3b3e72bc1366bc74f1e5a" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.281473 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79d7cd7f96-8w9s4" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.285524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerStarted","Data":"65894d77fc210a80d2a300974a8884e3dd56bc689e2fe94a76a68d94b734a4e1"} Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.317764 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.317860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data\") pod 
\"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.318021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.318069 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.318128 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkfd\" (UniqueName: \"kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.318166 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.329128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs\") pod 
\"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.359694 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.360522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.362611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.371210 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkfd\" (UniqueName: \"kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.398750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data\") pod \"heat-cfnapi-65d6464c6f-swdhs\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.407161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7995cbf6b7-gc2m4"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.424097 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.439886 4718 scope.go:117] "RemoveContainer" containerID="208e15d0e6f071083c884987d678627e340c6690645f9772f8c5aef7aa6fc9d3"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.443012 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65d6464c6f-swdhs"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.458081 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.494616 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: E0123 16:38:56.540533 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29e5d971_73b4_4847_a650_4de4832ffdd6.slice/crio-942c7fd6ce679a7e62ec996a662aa38d6be37e10151cab9fdd03ea4040ce1148\": RecentStats: unable to find data in memory cache]"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.676266 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.704924 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.720075 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.734320 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.748334 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.750470 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.754558 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.754874 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.755041 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.795226 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.795525 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-log" containerID="cri-o://c835b0e0608bfeabe82e8bde61b1e8328bdbbff6fb51fd300ae25efdc40fdbb5" gracePeriod=30
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.795866 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-httpd" containerID="cri-o://aef28a08b2106dce09891bf581414767268366f60e5b60a0a10499e6de041f0c" gracePeriod=30
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.818143 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.829379 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.838780 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79d7cd7f96-8w9s4"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.854950 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-79d7cd7f96-8w9s4"]
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.894936 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5qz\" (UniqueName: \"kubernetes.io/projected/9fdab71e-08c8-4269-a9dd-69b152751e4d-kube-api-access-8v5qz\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895191 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fdab71e-08c8-4269-a9dd-69b152751e4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895338 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdab71e-08c8-4269-a9dd-69b152751e4d-logs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895660 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895789 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:56 crc kubenswrapper[4718]: I0123 16:38:56.895874 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-scripts\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002744 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v5qz\" (UniqueName: \"kubernetes.io/projected/9fdab71e-08c8-4269-a9dd-69b152751e4d-kube-api-access-8v5qz\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fdab71e-08c8-4269-a9dd-69b152751e4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002807 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdab71e-08c8-4269-a9dd-69b152751e4d-logs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002847 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.002984 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.003008 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-scripts\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.008757 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fdab71e-08c8-4269-a9dd-69b152751e4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.015283 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdab71e-08c8-4269-a9dd-69b152751e4d-logs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.033944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.055190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.055432 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-scripts\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.056382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.064496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.070342 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v5qz\" (UniqueName: \"kubernetes.io/projected/9fdab71e-08c8-4269-a9dd-69b152751e4d-kube-api-access-8v5qz\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.071570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdab71e-08c8-4269-a9dd-69b152751e4d-config-data\") pod \"cinder-api-0\" (UID: \"9fdab71e-08c8-4269-a9dd-69b152751e4d\") " pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.091313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.221166 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e5d971-73b4-4847-a650-4de4832ffdd6" path="/var/lib/kubelet/pods/29e5d971-73b4-4847-a650-4de4832ffdd6/volumes"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.256737 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce5910e-d662-4c66-a349-684b2d98509c" path="/var/lib/kubelet/pods/5ce5910e-d662-4c66-a349-684b2d98509c/volumes"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.358164 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd7745668-858qv" event={"ID":"0e281000-f117-4ab6-8a56-a741d57ac660","Type":"ContainerStarted","Data":"8b471805e66435a7e919165c64127f7348149974d78d515d5fee7f7117b9ca68"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.360071 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7cd7745668-858qv"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.374825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" event={"ID":"945685c3-965d-45a3-b1dc-f1fea0a489dc","Type":"ContainerStarted","Data":"dc268aa35dcdf03db3a201ee9894f3171f7018f7af726e248b7e2d6e709e79db"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.379948 4718 generic.go:334] "Generic (PLEG): container finished" podID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerID="c835b0e0608bfeabe82e8bde61b1e8328bdbbff6fb51fd300ae25efdc40fdbb5" exitCode=143
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.380022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerDied","Data":"c835b0e0608bfeabe82e8bde61b1e8328bdbbff6fb51fd300ae25efdc40fdbb5"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.385008 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" event={"ID":"0928142a-4c90-4e86-9e29-8e7abb282cf0","Type":"ContainerStarted","Data":"740cc86e9d97175ad2666e4f2a6a09ac0bbe3b0bd533954fa980f9ab9d240731"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.407073 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7cd7745668-858qv" podStartSLOduration=11.40704962 podStartE2EDuration="11.40704962s" podCreationTimestamp="2026-01-23 16:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:57.3948873 +0000 UTC m=+1338.542129281" watchObservedRunningTime="2026-01-23 16:38:57.40704962 +0000 UTC m=+1338.554291611"
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.441313 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" event={"ID":"817285ad-e89c-4123-b42a-b622631062cd","Type":"ContainerStarted","Data":"713e38400b3d571ffa91032259d4cae018be7135490ec1afd5ef87f3231d673b"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.459754 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" event={"ID":"3ae9ff45-f144-444f-b736-40cc69a7bda0","Type":"ContainerStarted","Data":"dd2fe203780c4c4ce07519172902e25642ce9b170bb130bb8a5c30f0a6889cc4"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.473871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerStarted","Data":"5913e3d36def17b3cd5d19b1bac3c511a17b26441a4e83f8b49399c1db6f2f3f"}
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.520530 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"]
Jan 23 16:38:57 crc kubenswrapper[4718]: I0123 16:38:57.762976 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.042948 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.578113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdab71e-08c8-4269-a9dd-69b152751e4d","Type":"ContainerStarted","Data":"3ed7fb972b6a6942c81dca5dc2239643c64e4adb8d7bdf0f1af1a529187a521b"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.633218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" event={"ID":"a7cf1cff-e480-43e4-b8df-c0b44812baab","Type":"ContainerStarted","Data":"cea1fb2a450d3c3f9d85fce1f51d4694d35aca22bf03c591390576341d7adf5a"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.667096 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bthbv"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.668983 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.670316 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerID="aa4c11d74afd8ed9452f8e5ce6e954db7506ce78803bb3f8b30827ce21f8d972" exitCode=0
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.671343 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" event={"ID":"3ae9ff45-f144-444f-b736-40cc69a7bda0","Type":"ContainerStarted","Data":"7454ef626a552ccffcd797cb590e1657c5d4d57d46c48759899a4922da7ab185"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.671403 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" event={"ID":"3ae9ff45-f144-444f-b736-40cc69a7bda0","Type":"ContainerDied","Data":"aa4c11d74afd8ed9452f8e5ce6e954db7506ce78803bb3f8b30827ce21f8d972"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.672528 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.721908 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bthbv"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.724324 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerStarted","Data":"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.762875 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.762994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j6n\" (UniqueName: \"kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.773489 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd7745668-858qv" event={"ID":"0e281000-f117-4ab6-8a56-a741d57ac660","Type":"ContainerStarted","Data":"3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.790015 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" event={"ID":"0928142a-4c90-4e86-9e29-8e7abb282cf0","Type":"ContainerStarted","Data":"7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.790086 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8rn8k"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.793317 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b6b6b4c97-tcz2k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.793370 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.830582 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7995cbf6b7-gc2m4" event={"ID":"f6e56f12-62cd-469a-a48a-0319680955f5","Type":"ContainerStarted","Data":"d4bc62a26a9900c12b07758bebd4999e28f01e67aaea54f3f1164fa296d995f7"}
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.837853 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8rn8k"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.864044 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" podStartSLOduration=12.864023499 podStartE2EDuration="12.864023499s" podCreationTimestamp="2026-01-23 16:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:58.784369387 +0000 UTC m=+1339.931611378" watchObservedRunningTime="2026-01-23 16:38:58.864023499 +0000 UTC m=+1340.011265480"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.866287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhm2\" (UniqueName: \"kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.866466 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.866560 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j6n\" (UniqueName: \"kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.866582 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.868475 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.913415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j6n\" (UniqueName: \"kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n\") pod \"nova-api-db-create-bthbv\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.921087 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9952-account-create-update-x5sd9"]
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.923305 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.926952 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.939471 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" podStartSLOduration=5.939450316 podStartE2EDuration="5.939450316s" podCreationTimestamp="2026-01-23 16:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:38:58.827311933 +0000 UTC m=+1339.974553924" watchObservedRunningTime="2026-01-23 16:38:58.939450316 +0000 UTC m=+1340.086692297"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.970663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.970732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcpv\" (UniqueName: \"kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.970783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.970861 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhm2\" (UniqueName: \"kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.976761 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:58 crc kubenswrapper[4718]: I0123 16:38:58.987214 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9952-account-create-update-x5sd9"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.017278 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qhm2\" (UniqueName: \"kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2\") pod \"nova-cell0-db-create-8rn8k\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.018716 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bthbv"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.062283 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kwhc9"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.064418 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kwhc9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.081639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcpv\" (UniqueName: \"kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.087150 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.088328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.102256 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kwhc9"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.121292 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcpv\" (UniqueName: \"kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv\") pod \"nova-api-9952-account-create-update-x5sd9\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.146713 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-573f-account-create-update-ktn8s"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.150514 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-573f-account-create-update-ktn8s"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.178397 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9952-account-create-update-x5sd9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.182175 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8rn8k"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.203074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m72r\" (UniqueName: \"kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.206415 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.206475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.206675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbnk\" (UniqueName: \"kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.244532 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.312241 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-573f-account-create-update-ktn8s"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.312477 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c3f8-account-create-update-pdfm4"]
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.316019 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.318201 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.322048 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbnk\" (UniqueName: \"kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.322930 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m72r\" (UniqueName: \"kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.323367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.323404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9"
Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.324382 4718 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.325093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.354391 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c3f8-account-create-update-pdfm4"] Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.364973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m72r\" (UniqueName: \"kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r\") pod \"nova-cell0-573f-account-create-update-ktn8s\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " pod="openstack/nova-cell0-573f-account-create-update-ktn8s" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.393487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbnk\" (UniqueName: \"kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk\") pod \"nova-cell1-db-create-kwhc9\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " pod="openstack/nova-cell1-db-create-kwhc9" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.426710 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpv94\" (UniqueName: \"kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94\") pod 
\"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.426899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts\") pod \"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.465841 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.541830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts\") pod \"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.543092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpv94\" (UniqueName: \"kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94\") pod \"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.543940 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts\") pod \"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: 
\"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.604469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpv94\" (UniqueName: \"kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94\") pod \"nova-cell1-c3f8-account-create-update-pdfm4\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.619817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kwhc9" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.798807 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.930624 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bthbv"] Jan 23 16:38:59 crc kubenswrapper[4718]: I0123 16:38:59.933807 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdab71e-08c8-4269-a9dd-69b152751e4d","Type":"ContainerStarted","Data":"447b331db5dcc9426e2f3f5868c2feea639044502567db7531cd9b2e61fc8d8d"} Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.163691 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.168145 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-httpd" containerID="cri-o://ab09c8bb2b1ab814b21185f83d61eb76e4b8bd0e37fb09f239e6729b3f99f6df" gracePeriod=30 Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.168115 4718 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-log" containerID="cri-o://c7d9778bd38f9b1dd30e6e5fc91959033ff5becb7bacc4bbe4a8f9af2ae57aa7" gracePeriod=30 Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.219431 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8rn8k"] Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.243240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9952-account-create-update-x5sd9"] Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.962126 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerID="c7d9778bd38f9b1dd30e6e5fc91959033ff5becb7bacc4bbe4a8f9af2ae57aa7" exitCode=143 Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.962240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerDied","Data":"c7d9778bd38f9b1dd30e6e5fc91959033ff5becb7bacc4bbe4a8f9af2ae57aa7"} Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.969953 4718 generic.go:334] "Generic (PLEG): container finished" podID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerID="aef28a08b2106dce09891bf581414767268366f60e5b60a0a10499e6de041f0c" exitCode=0 Jan 23 16:39:00 crc kubenswrapper[4718]: I0123 16:39:00.970052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerDied","Data":"aef28a08b2106dce09891bf581414767268366f60e5b60a0a10499e6de041f0c"} Jan 23 16:39:01 crc kubenswrapper[4718]: I0123 16:39:01.869934 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-573f-account-create-update-ktn8s"] Jan 23 16:39:01 crc kubenswrapper[4718]: I0123 16:39:01.990915 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9952-account-create-update-x5sd9" event={"ID":"c80a9db0-876a-4ba3-a5ce-018220181097","Type":"ContainerStarted","Data":"aa83c890718647e0eb13655a70ce953dfb003ba7592c8f897e6d4c57c037abb1"} Jan 23 16:39:01 crc kubenswrapper[4718]: I0123 16:39:01.999016 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8rn8k" event={"ID":"328feadc-ef28-4714-acfc-a10826fb69f6","Type":"ContainerStarted","Data":"904fecf9512207ee79a5c62d7879e639e058478d6c05bcf8c8f6cac4220e4be8"} Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.170897 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.250720 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.251010 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="dnsmasq-dns" containerID="cri-o://2d5fbf3a343b8f4233eb653866d1e5dc200f2cbdeb4aa20acae03845c6e095ed" gracePeriod=10 Jan 23 16:39:02 crc kubenswrapper[4718]: W0123 16:39:02.430121 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab905731_cf34_4768_9cef_c35c3bed8f22.slice/crio-811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c WatchSource:0}: Error finding container 811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c: Status 404 returned error can't find the container with id 811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.542311 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.638233 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.638780 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.638844 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.639034 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.639078 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.639136 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbln\" 
(UniqueName: \"kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.639185 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.639221 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts\") pod \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\" (UID: \"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e\") " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.651467 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs" (OuterVolumeSpecName: "logs") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.670849 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.678063 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln" (OuterVolumeSpecName: "kube-api-access-nhbln") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "kube-api-access-nhbln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.706474 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts" (OuterVolumeSpecName: "scripts") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.715265 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.754548 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.754572 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.754584 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhbln\" (UniqueName: \"kubernetes.io/projected/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-kube-api-access-nhbln\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.754592 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.754602 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.814852 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data" (OuterVolumeSpecName: "config-data") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.840600 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f" (OuterVolumeSpecName: "glance") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.875922 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.875978 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") on node \"crc\" " Jan 23 16:39:02 crc kubenswrapper[4718]: I0123 16:39:02.965340 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" (UID: "d6ff37b7-c913-4daa-ad68-a8b7a5feba7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.046752 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.047984 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f") on node "crc" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.096954 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.097022 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.110913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bthbv" event={"ID":"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09","Type":"ContainerStarted","Data":"3ef42f0bfb89940f7b394dd19d588e1f96932fea8e39bacbbeb61563816802b8"} Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.114611 4718 generic.go:334] "Generic (PLEG): container finished" podID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerID="2d5fbf3a343b8f4233eb653866d1e5dc200f2cbdeb4aa20acae03845c6e095ed" exitCode=0 Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.114716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" event={"ID":"b2153bde-84f6-45c0-9b35-e7b4943cbcee","Type":"ContainerDied","Data":"2d5fbf3a343b8f4233eb653866d1e5dc200f2cbdeb4aa20acae03845c6e095ed"} Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.116272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d6ff37b7-c913-4daa-ad68-a8b7a5feba7e","Type":"ContainerDied","Data":"342a68aa103d9a8034134c2134bb38b9a8df9588475b30d339f06607c8b8accb"} Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.116311 4718 scope.go:117] "RemoveContainer" containerID="aef28a08b2106dce09891bf581414767268366f60e5b60a0a10499e6de041f0c" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.116588 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.137654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" event={"ID":"ab905731-cf34-4768-9cef-c35c3bed8f22","Type":"ContainerStarted","Data":"811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c"} Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.283687 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.368229 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.413671 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:39:03 crc kubenswrapper[4718]: E0123 16:39:03.414344 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-log" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.414366 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-log" Jan 23 16:39:03 crc kubenswrapper[4718]: E0123 16:39:03.414411 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-httpd" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.414420 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-httpd" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.414670 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-log" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.414702 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" containerName="glance-httpd" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.420033 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.427053 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.432021 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.468977 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531801 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " 
pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531884 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4px\" (UniqueName: \"kubernetes.io/projected/3948da45-04b4-4a32-b5d5-0701d87095a7-kube-api-access-kt4px\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531908 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-logs\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.531979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.532009 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod 
\"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.532032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.557474 4718 scope.go:117] "RemoveContainer" containerID="c835b0e0608bfeabe82e8bde61b1e8328bdbbff6fb51fd300ae25efdc40fdbb5" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.573248 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c3f8-account-create-update-pdfm4"] Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.636387 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.636674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.636861 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.637060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4px\" (UniqueName: \"kubernetes.io/projected/3948da45-04b4-4a32-b5d5-0701d87095a7-kube-api-access-kt4px\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.637125 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-logs\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.637244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.637288 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.637311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.639975 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.640895 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3948da45-04b4-4a32-b5d5-0701d87095a7-logs\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.645714 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.646493 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/41586b6fcb1f91b026169314f421fada9e29acb3bb28133ae98a72e7e358c6a3/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.654397 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.658743 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.668088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.669261 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3948da45-04b4-4a32-b5d5-0701d87095a7-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.687126 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4px\" (UniqueName: \"kubernetes.io/projected/3948da45-04b4-4a32-b5d5-0701d87095a7-kube-api-access-kt4px\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.850192 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kwhc9"] Jan 23 16:39:03 crc kubenswrapper[4718]: I0123 16:39:03.902689 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d38d059-aa80-41f0-a9b5-afb1a4651b9f\") pod \"glance-default-external-api-0\" (UID: \"3948da45-04b4-4a32-b5d5-0701d87095a7\") " pod="openstack/glance-default-external-api-0" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.244772 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.294440 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.449696 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.449813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jkf9\" (UniqueName: \"kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.450146 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.450270 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.450907 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.451081 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb\") pod \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\" (UID: \"b2153bde-84f6-45c0-9b35-e7b4943cbcee\") " Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.487250 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerID="ab09c8bb2b1ab814b21185f83d61eb76e4b8bd0e37fb09f239e6729b3f99f6df" exitCode=0 Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.487871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerDied","Data":"ab09c8bb2b1ab814b21185f83d61eb76e4b8bd0e37fb09f239e6729b3f99f6df"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.525374 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bthbv" event={"ID":"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09","Type":"ContainerStarted","Data":"f0f3c365ac1b8f78324b14087daa98f29f15728d5122e41c8ca5fc387de4da59"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.554049 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" event={"ID":"b2153bde-84f6-45c0-9b35-e7b4943cbcee","Type":"ContainerDied","Data":"4a64815e27dd895d98b6c353dca2dd9c41cb450524ee9961278eda0c42157463"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.554237 4718 scope.go:117] "RemoveContainer" containerID="2d5fbf3a343b8f4233eb653866d1e5dc200f2cbdeb4aa20acae03845c6e095ed" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.554064 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.560081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kwhc9" event={"ID":"77678c25-65e5-449f-b905-c8732eb45518","Type":"ContainerStarted","Data":"de87eefcaa3b30ea5284f61d7a3c4c4dffeb0eff27d353e488f5f815e7d15a7e"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.584018 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9" (OuterVolumeSpecName: "kube-api-access-8jkf9") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "kube-api-access-8jkf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.594231 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8rn8k" event={"ID":"328feadc-ef28-4714-acfc-a10826fb69f6","Type":"ContainerStarted","Data":"5f7e29dba2fd77eff5cfd625679869ea9a88ea76ee7468bd11a247455e817879"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.612869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" event={"ID":"d454d95a-f920-46da-9b19-70d3b987d808","Type":"ContainerStarted","Data":"be0144fb94fbd14f9b37ee2f377bf00202097e88e3c10f4c1c1ae5137590fc43"} Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.624359 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-bthbv" podStartSLOduration=6.624336437 podStartE2EDuration="6.624336437s" podCreationTimestamp="2026-01-23 16:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:04.592956356 +0000 UTC m=+1345.740198347" 
watchObservedRunningTime="2026-01-23 16:39:04.624336437 +0000 UTC m=+1345.771578428" Jan 23 16:39:04 crc kubenswrapper[4718]: I0123 16:39:04.666266 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jkf9\" (UniqueName: \"kubernetes.io/projected/b2153bde-84f6-45c0-9b35-e7b4943cbcee-kube-api-access-8jkf9\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.169245 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ff37b7-c913-4daa-ad68-a8b7a5feba7e" path="/var/lib/kubelet/pods/d6ff37b7-c913-4daa-ad68-a8b7a5feba7e/volumes" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.344152 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config" (OuterVolumeSpecName: "config") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.357048 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.395826 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.420420 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.449549 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.495727 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.495766 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.495786 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.555976 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb" (OuterVolumeSpecName: 
"ovsdbserver-nb") pod "b2153bde-84f6-45c0-9b35-e7b4943cbcee" (UID: "b2153bde-84f6-45c0-9b35-e7b4943cbcee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.609502 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2153bde-84f6-45c0-9b35-e7b4943cbcee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.657759 4718 generic.go:334] "Generic (PLEG): container finished" podID="0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" containerID="f0f3c365ac1b8f78324b14087daa98f29f15728d5122e41c8ca5fc387de4da59" exitCode=0 Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.670393 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-844bdc55c5-4kc9t" podStartSLOduration=5.9321023440000005 podStartE2EDuration="12.670369973s" podCreationTimestamp="2026-01-23 16:38:53 +0000 UTC" firstStartedPulling="2026-01-23 16:38:55.968771739 +0000 UTC m=+1337.116013730" lastFinishedPulling="2026-01-23 16:39:02.707039368 +0000 UTC m=+1343.854281359" observedRunningTime="2026-01-23 16:39:05.669649604 +0000 UTC m=+1346.816891655" watchObservedRunningTime="2026-01-23 16:39:05.670369973 +0000 UTC m=+1346.817611964" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.686609 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" podUID="945685c3-965d-45a3-b1dc-f1fea0a489dc" containerName="heat-cfnapi" containerID="cri-o://8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa" gracePeriod=60 Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.717581 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7dc8889c5c-4lxb7" podUID="0afc8879-60d8-4d63-9784-e408e7e46ec8" containerName="heat-api" 
containerID="cri-o://b469575e590a2fac46afe16fca28bf9940d84efdd7d4cfccd87a63d63835a4c6" gracePeriod=60 Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.775990 4718 scope.go:117] "RemoveContainer" containerID="3a0886b2966b22ce24c80e8dd6dc18afe1ae7176611864ce2e51ddce4717a14e" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777048 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777086 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3f14be9-3ab8-4e54-852d-82a373d11028","Type":"ContainerDied","Data":"caf35a4b84378fd8a4d6712924276ad7b0d5dfe34b1f9dcb16dd753972dd88b9"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777136 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf35a4b84378fd8a4d6712924276ad7b0d5dfe34b1f9dcb16dd753972dd88b9" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777151 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerStarted","Data":"5db41c001d86735ebb0cfa263207ea0f1985545adab30dcbb2b98981eb778078"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777174 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777206 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bthbv" event={"ID":"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09","Type":"ContainerDied","Data":"f0f3c365ac1b8f78324b14087daa98f29f15728d5122e41c8ca5fc387de4da59"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 
16:39:05.777222 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777235 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" event={"ID":"945685c3-965d-45a3-b1dc-f1fea0a489dc","Type":"ContainerStarted","Data":"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777245 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dc8889c5c-4lxb7" event={"ID":"0afc8879-60d8-4d63-9784-e408e7e46ec8","Type":"ContainerStarted","Data":"b469575e590a2fac46afe16fca28bf9940d84efdd7d4cfccd87a63d63835a4c6"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.777901 4718 generic.go:334] "Generic (PLEG): container finished" podID="328feadc-ef28-4714-acfc-a10826fb69f6" containerID="5f7e29dba2fd77eff5cfd625679869ea9a88ea76ee7468bd11a247455e817879" exitCode=0 Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.778040 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8rn8k" event={"ID":"328feadc-ef28-4714-acfc-a10826fb69f6","Type":"ContainerDied","Data":"5f7e29dba2fd77eff5cfd625679869ea9a88ea76ee7468bd11a247455e817879"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.784735 4718 generic.go:334] "Generic (PLEG): container finished" podID="ab905731-cf34-4768-9cef-c35c3bed8f22" containerID="a2cea0a7c56463d1d2340a33d4783bd8bb88c0c7d0cbd8b5c0c7a6c7ff8d7048" exitCode=0 Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.784799 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" event={"ID":"ab905731-cf34-4768-9cef-c35c3bed8f22","Type":"ContainerDied","Data":"a2cea0a7c56463d1d2340a33d4783bd8bb88c0c7d0cbd8b5c0c7a6c7ff8d7048"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.792773 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3948da45-04b4-4a32-b5d5-0701d87095a7","Type":"ContainerStarted","Data":"a8a7207b46d661c55b98e0cd63592c3a0e7356430c1bda11c204e6d1a6e5fa4e"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.799444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9952-account-create-update-x5sd9" event={"ID":"c80a9db0-876a-4ba3-a5ce-018220181097","Type":"ContainerStarted","Data":"4f7363a6dee84e8b239dd8490dfbb356900138d3043c207ccdf11aa1357e39b3"} Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.839129 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" podStartSLOduration=13.429636878 podStartE2EDuration="19.839106683s" podCreationTimestamp="2026-01-23 16:38:46 +0000 UTC" firstStartedPulling="2026-01-23 16:38:56.667492481 +0000 UTC m=+1337.814734472" lastFinishedPulling="2026-01-23 16:39:03.076962286 +0000 UTC m=+1344.224204277" observedRunningTime="2026-01-23 16:39:05.745617816 +0000 UTC m=+1346.892859827" watchObservedRunningTime="2026-01-23 16:39:05.839106683 +0000 UTC m=+1346.986348674" Jan 23 16:39:05 crc kubenswrapper[4718]: I0123 16:39:05.895809 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7dc8889c5c-4lxb7" podStartSLOduration=13.16625224 podStartE2EDuration="19.895768971s" podCreationTimestamp="2026-01-23 16:38:46 +0000 UTC" firstStartedPulling="2026-01-23 16:38:55.980179878 +0000 UTC m=+1337.127421869" lastFinishedPulling="2026-01-23 16:39:02.709696619 +0000 UTC m=+1343.856938600" observedRunningTime="2026-01-23 16:39:05.782038964 +0000 UTC m=+1346.929280955" watchObservedRunningTime="2026-01-23 16:39:05.895768971 +0000 UTC m=+1347.043010952" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.027949 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.040580 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9952-account-create-update-x5sd9" podStartSLOduration=8.040553419 podStartE2EDuration="8.040553419s" podCreationTimestamp="2026-01-23 16:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:05.828970748 +0000 UTC m=+1346.976212739" watchObservedRunningTime="2026-01-23 16:39:06.040553419 +0000 UTC m=+1347.187795410" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.133964 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136438 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136509 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136571 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136595 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136772 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.136883 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.137077 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.140642 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.140964 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs" (OuterVolumeSpecName: "logs") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.240394 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-w4hbk"] Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.243310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5" (OuterVolumeSpecName: "kube-api-access-kznf5") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "kube-api-access-kznf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.246768 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts" (OuterVolumeSpecName: "scripts") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.257966 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.258164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") pod \"b3f14be9-3ab8-4e54-852d-82a373d11028\" (UID: \"b3f14be9-3ab8-4e54-852d-82a373d11028\") " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.259201 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.259218 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14be9-3ab8-4e54-852d-82a373d11028-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:06 crc kubenswrapper[4718]: W0123 16:39:06.263911 4718 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b3f14be9-3ab8-4e54-852d-82a373d11028/volumes/kubernetes.io~projected/kube-api-access-kznf5 Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.264004 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5" (OuterVolumeSpecName: "kube-api-access-kznf5") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "kube-api-access-kznf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: W0123 16:39:06.264102 4718 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b3f14be9-3ab8-4e54-852d-82a373d11028/volumes/kubernetes.io~secret/scripts Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.264113 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts" (OuterVolumeSpecName: "scripts") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.363055 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.363087 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kznf5\" (UniqueName: \"kubernetes.io/projected/b3f14be9-3ab8-4e54-852d-82a373d11028-kube-api-access-kznf5\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.661109 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8" (OuterVolumeSpecName: "glance") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.673157 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") on node \"crc\" " Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.841940 4718 generic.go:334] "Generic (PLEG): container finished" podID="c80a9db0-876a-4ba3-a5ce-018220181097" containerID="4f7363a6dee84e8b239dd8490dfbb356900138d3043c207ccdf11aa1357e39b3" exitCode=0 Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.842288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9952-account-create-update-x5sd9" event={"ID":"c80a9db0-876a-4ba3-a5ce-018220181097","Type":"ContainerDied","Data":"4f7363a6dee84e8b239dd8490dfbb356900138d3043c207ccdf11aa1357e39b3"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.847306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdab71e-08c8-4269-a9dd-69b152751e4d","Type":"ContainerStarted","Data":"2ab16875553595443ee394c31341f29a430996831c924e943b1c58078a13af23"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.848857 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.853739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" event={"ID":"a7cf1cff-e480-43e4-b8df-c0b44812baab","Type":"ContainerStarted","Data":"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.853814 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.861553 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" event={"ID":"817285ad-e89c-4123-b42a-b622631062cd","Type":"ContainerStarted","Data":"b77c7722db9d0362705fda26bd987db0c0e60b0d4ac66493558864009d090ad1"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.862733 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.877235 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8rn8k" event={"ID":"328feadc-ef28-4714-acfc-a10826fb69f6","Type":"ContainerDied","Data":"904fecf9512207ee79a5c62d7879e639e058478d6c05bcf8c8f6cac4220e4be8"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.877284 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="904fecf9512207ee79a5c62d7879e639e058478d6c05bcf8c8f6cac4220e4be8" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.879691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" event={"ID":"d454d95a-f920-46da-9b19-70d3b987d808","Type":"ContainerStarted","Data":"b169b4084d813f4c4209f67c653fe48d6df29ce0761b44ec50ea030a6dda81cf"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.884100 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.888000 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.897463 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1f0df76-9570-4514-8a62-3986fb03976b" containerID="5db41c001d86735ebb0cfa263207ea0f1985545adab30dcbb2b98981eb778078" exitCode=1 Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.897566 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerDied","Data":"5db41c001d86735ebb0cfa263207ea0f1985545adab30dcbb2b98981eb778078"} Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.898507 4718 scope.go:117] "RemoveContainer" containerID="5db41c001d86735ebb0cfa263207ea0f1985545adab30dcbb2b98981eb778078" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.911315 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.940212 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.940389 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8") on node "crc" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.958039 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" podStartSLOduration=5.430998499 podStartE2EDuration="10.958018507s" podCreationTimestamp="2026-01-23 16:38:56 +0000 UTC" firstStartedPulling="2026-01-23 16:38:57.521575728 +0000 UTC m=+1338.668817719" lastFinishedPulling="2026-01-23 16:39:03.048595736 +0000 UTC m=+1344.195837727" observedRunningTime="2026-01-23 16:39:06.877834121 +0000 UTC m=+1348.025076112" watchObservedRunningTime="2026-01-23 16:39:06.958018507 +0000 UTC m=+1348.105260498" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.964055 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.96403606 podStartE2EDuration="10.96403606s" podCreationTimestamp="2026-01-23 16:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:06.909425839 +0000 UTC m=+1348.056667840" watchObservedRunningTime="2026-01-23 16:39:06.96403606 +0000 UTC m=+1348.111278051" Jan 23 16:39:06 crc kubenswrapper[4718]: I0123 16:39:06.990815 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:06.994467 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.030727 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data" (OuterVolumeSpecName: "config-data") pod "b3f14be9-3ab8-4e54-852d-82a373d11028" (UID: "b3f14be9-3ab8-4e54-852d-82a373d11028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.060368 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" podStartSLOduration=8.060337954 podStartE2EDuration="8.060337954s" podCreationTimestamp="2026-01-23 16:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:06.94413839 +0000 UTC m=+1348.091380381" watchObservedRunningTime="2026-01-23 16:39:07.060337954 +0000 UTC m=+1348.207579955" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.071290 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" podStartSLOduration=7.507034355 podStartE2EDuration="14.07126354s" podCreationTimestamp="2026-01-23 16:38:53 +0000 UTC" firstStartedPulling="2026-01-23 16:38:56.483332713 +0000 UTC m=+1337.630574704" lastFinishedPulling="2026-01-23 16:39:03.047561898 +0000 UTC m=+1344.194803889" observedRunningTime="2026-01-23 16:39:06.965905851 +0000 UTC m=+1348.113147842" watchObservedRunningTime="2026-01-23 16:39:07.07126354 +0000 UTC m=+1348.218505531" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 
16:39:07.102384 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.102423 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14be9-3ab8-4e54-852d-82a373d11028-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.120205 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.188829 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" path="/var/lib/kubelet/pods/b2153bde-84f6-45c0-9b35-e7b4943cbcee/volumes" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.434412 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8rn8k" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.518626 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts\") pod \"328feadc-ef28-4714-acfc-a10826fb69f6\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.519107 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qhm2\" (UniqueName: \"kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2\") pod \"328feadc-ef28-4714-acfc-a10826fb69f6\" (UID: \"328feadc-ef28-4714-acfc-a10826fb69f6\") " Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.521623 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "328feadc-ef28-4714-acfc-a10826fb69f6" (UID: "328feadc-ef28-4714-acfc-a10826fb69f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.548298 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2" (OuterVolumeSpecName: "kube-api-access-4qhm2") pod "328feadc-ef28-4714-acfc-a10826fb69f6" (UID: "328feadc-ef28-4714-acfc-a10826fb69f6"). InnerVolumeSpecName "kube-api-access-4qhm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.622994 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qhm2\" (UniqueName: \"kubernetes.io/projected/328feadc-ef28-4714-acfc-a10826fb69f6-kube-api-access-4qhm2\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.623475 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328feadc-ef28-4714-acfc-a10826fb69f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:07 crc kubenswrapper[4718]: I0123 16:39:07.871334 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bthbv" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.038842 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.040457 4718 generic.go:334] "Generic (PLEG): container finished" podID="77678c25-65e5-449f-b905-c8732eb45518" containerID="d6f3f061df6fd12443d3fb9cecbc496a4a5aea0202f56938da22529b2903a4cb" exitCode=0 Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.040677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kwhc9" event={"ID":"77678c25-65e5-449f-b905-c8732eb45518","Type":"ContainerDied","Data":"d6f3f061df6fd12443d3fb9cecbc496a4a5aea0202f56938da22529b2903a4cb"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.062468 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9j6n\" (UniqueName: \"kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n\") pod \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.062958 
4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts\") pod \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\" (UID: \"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.065584 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" (UID: "0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.077381 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7995cbf6b7-gc2m4" event={"ID":"f6e56f12-62cd-469a-a48a-0319680955f5","Type":"ContainerStarted","Data":"7ecae85aa5d677740d04bfbb74c835105846bdce9c410fba05210ebe5f307dc7"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.078554 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.085774 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n" (OuterVolumeSpecName: "kube-api-access-q9j6n") pod "0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" (UID: "0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09"). InnerVolumeSpecName "kube-api-access-q9j6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.091956 4718 generic.go:334] "Generic (PLEG): container finished" podID="817285ad-e89c-4123-b42a-b622631062cd" containerID="b77c7722db9d0362705fda26bd987db0c0e60b0d4ac66493558864009d090ad1" exitCode=1 Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.092069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" event={"ID":"817285ad-e89c-4123-b42a-b622631062cd","Type":"ContainerDied","Data":"b77c7722db9d0362705fda26bd987db0c0e60b0d4ac66493558864009d090ad1"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.104291 4718 scope.go:117] "RemoveContainer" containerID="b77c7722db9d0362705fda26bd987db0c0e60b0d4ac66493558864009d090ad1" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.109653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" event={"ID":"ab905731-cf34-4768-9cef-c35c3bed8f22","Type":"ContainerDied","Data":"811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.109697 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811277b89e7bddd1b2518687fe0e120f0e7efe139676d148de0d6ebf9dea2e1c" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.109782 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-573f-account-create-update-ktn8s" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.111521 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7995cbf6b7-gc2m4" podStartSLOduration=7.869582159 podStartE2EDuration="13.11149988s" podCreationTimestamp="2026-01-23 16:38:55 +0000 UTC" firstStartedPulling="2026-01-23 16:38:57.802973044 +0000 UTC m=+1338.950215035" lastFinishedPulling="2026-01-23 16:39:03.044890765 +0000 UTC m=+1344.192132756" observedRunningTime="2026-01-23 16:39:08.100782779 +0000 UTC m=+1349.248024780" watchObservedRunningTime="2026-01-23 16:39:08.11149988 +0000 UTC m=+1349.258741871" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.169167 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts\") pod \"ab905731-cf34-4768-9cef-c35c3bed8f22\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.169370 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m72r\" (UniqueName: \"kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r\") pod \"ab905731-cf34-4768-9cef-c35c3bed8f22\" (UID: \"ab905731-cf34-4768-9cef-c35c3bed8f22\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.170216 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9j6n\" (UniqueName: \"kubernetes.io/projected/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-kube-api-access-q9j6n\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.170248 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 
16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.170971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab905731-cf34-4768-9cef-c35c3bed8f22" (UID: "ab905731-cf34-4768-9cef-c35c3bed8f22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.173952 4718 generic.go:334] "Generic (PLEG): container finished" podID="d454d95a-f920-46da-9b19-70d3b987d808" containerID="b169b4084d813f4c4209f67c653fe48d6df29ce0761b44ec50ea030a6dda81cf" exitCode=0 Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.174187 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" event={"ID":"d454d95a-f920-46da-9b19-70d3b987d808","Type":"ContainerDied","Data":"b169b4084d813f4c4209f67c653fe48d6df29ce0761b44ec50ea030a6dda81cf"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.182512 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r" (OuterVolumeSpecName: "kube-api-access-9m72r") pod "ab905731-cf34-4768-9cef-c35c3bed8f22" (UID: "ab905731-cf34-4768-9cef-c35c3bed8f22"). InnerVolumeSpecName "kube-api-access-9m72r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.186190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerStarted","Data":"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.189346 4718 scope.go:117] "RemoveContainer" containerID="cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.189460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerStarted","Data":"cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a"} Jan 23 16:39:08 crc kubenswrapper[4718]: E0123 16:39:08.189654 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-844bdc55c5-4kc9t_openstack(d1f0df76-9570-4514-8a62-3986fb03976b)\"" pod="openstack/heat-api-844bdc55c5-4kc9t" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.202994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bthbv" event={"ID":"0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09","Type":"ContainerDied","Data":"3ef42f0bfb89940f7b394dd19d588e1f96932fea8e39bacbbeb61563816802b8"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.203299 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef42f0bfb89940f7b394dd19d588e1f96932fea8e39bacbbeb61563816802b8" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.203427 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bthbv" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.211800 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8rn8k" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.216001 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3948da45-04b4-4a32-b5d5-0701d87095a7","Type":"ContainerStarted","Data":"066d1ef4ab70b96ef8a9171245ffc38b18e202c017523ecec47cd97d8d4eb141"} Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.272659 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m72r\" (UniqueName: \"kubernetes.io/projected/ab905731-cf34-4768-9cef-c35c3bed8f22-kube-api-access-9m72r\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.273114 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab905731-cf34-4768-9cef-c35c3bed8f22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.531919 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.548881 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.557525 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.614875 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-w4hbk" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: i/o timeout" Jan 23 16:39:08 crc 
kubenswrapper[4718]: I0123 16:39:08.844026 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9952-account-create-update-x5sd9" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.894446 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcpv\" (UniqueName: \"kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv\") pod \"c80a9db0-876a-4ba3-a5ce-018220181097\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.894668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts\") pod \"c80a9db0-876a-4ba3-a5ce-018220181097\" (UID: \"c80a9db0-876a-4ba3-a5ce-018220181097\") " Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.895482 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c80a9db0-876a-4ba3-a5ce-018220181097" (UID: "c80a9db0-876a-4ba3-a5ce-018220181097"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.901346 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv" (OuterVolumeSpecName: "kube-api-access-wmcpv") pod "c80a9db0-876a-4ba3-a5ce-018220181097" (UID: "c80a9db0-876a-4ba3-a5ce-018220181097"). InnerVolumeSpecName "kube-api-access-wmcpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.997366 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcpv\" (UniqueName: \"kubernetes.io/projected/c80a9db0-876a-4ba3-a5ce-018220181097-kube-api-access-wmcpv\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:08 crc kubenswrapper[4718]: I0123 16:39:08.997877 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c80a9db0-876a-4ba3-a5ce-018220181097-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.224598 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3948da45-04b4-4a32-b5d5-0701d87095a7","Type":"ContainerStarted","Data":"291e338668612bd5e58d97b30651204c9e6f2719f3396c498ac5129f191a387e"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.226466 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9952-account-create-update-x5sd9" event={"ID":"c80a9db0-876a-4ba3-a5ce-018220181097","Type":"ContainerDied","Data":"aa83c890718647e0eb13655a70ce953dfb003ba7592c8f897e6d4c57c037abb1"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.226536 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa83c890718647e0eb13655a70ce953dfb003ba7592c8f897e6d4c57c037abb1" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.226490 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9952-account-create-update-x5sd9" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.229993 4718 generic.go:334] "Generic (PLEG): container finished" podID="817285ad-e89c-4123-b42a-b622631062cd" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" exitCode=1 Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.230062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" event={"ID":"817285ad-e89c-4123-b42a-b622631062cd","Type":"ContainerDied","Data":"f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.230095 4718 scope.go:117] "RemoveContainer" containerID="b77c7722db9d0362705fda26bd987db0c0e60b0d4ac66493558864009d090ad1" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.232895 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerStarted","Data":"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.236518 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1f0df76-9570-4514-8a62-3986fb03976b" containerID="cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a" exitCode=1 Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.236606 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerDied","Data":"cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.237545 4718 scope.go:117] "RemoveContainer" containerID="cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.237976 4718 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-844bdc55c5-4kc9t_openstack(d1f0df76-9570-4514-8a62-3986fb03976b)\"" pod="openstack/heat-api-844bdc55c5-4kc9t" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.241937 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ed1d44ad-8796-452b-a194-17b351fc8c01","Type":"ContainerStarted","Data":"e2cdc8c05465b29a22859b46c981214d38ff1c006ff08d8cc42de8e4e54db97b"} Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.252963 4718 scope.go:117] "RemoveContainer" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.308791 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.30876643 podStartE2EDuration="6.30876643s" podCreationTimestamp="2026-01-23 16:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:09.266225366 +0000 UTC m=+1350.413467367" watchObservedRunningTime="2026-01-23 16:39:09.30876643 +0000 UTC m=+1350.456008421" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.317074 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6cb6c6d8c4-hwj8v_openstack(817285ad-e89c-4123-b42a-b622631062cd)\"" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" podUID="817285ad-e89c-4123-b42a-b622631062cd" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.440202 4718 scope.go:117] "RemoveContainer" containerID="5db41c001d86735ebb0cfa263207ea0f1985545adab30dcbb2b98981eb778078" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 
16:39:09.614932 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.925940474 podStartE2EDuration="34.614909217s" podCreationTimestamp="2026-01-23 16:38:35 +0000 UTC" firstStartedPulling="2026-01-23 16:38:36.176160451 +0000 UTC m=+1317.323402442" lastFinishedPulling="2026-01-23 16:39:07.865129184 +0000 UTC m=+1349.012371185" observedRunningTime="2026-01-23 16:39:09.437230586 +0000 UTC m=+1350.584472577" watchObservedRunningTime="2026-01-23 16:39:09.614909217 +0000 UTC m=+1350.762151208" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630171 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g6vj6"] Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630869 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630885 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630909 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="init" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630915 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="init" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630929 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="dnsmasq-dns" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630935 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="dnsmasq-dns" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630954 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-log" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630959 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-log" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630967 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328feadc-ef28-4714-acfc-a10826fb69f6" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630972 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="328feadc-ef28-4714-acfc-a10826fb69f6" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.630989 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80a9db0-876a-4ba3-a5ce-018220181097" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.630995 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80a9db0-876a-4ba3-a5ce-018220181097" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.631006 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab905731-cf34-4768-9cef-c35c3bed8f22" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.631012 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab905731-cf34-4768-9cef-c35c3bed8f22" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: E0123 16:39:09.631032 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-httpd" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.631038 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-httpd" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 
16:39:09.631244 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab905731-cf34-4768-9cef-c35c3bed8f22" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637227 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-log" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637291 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637302 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80a9db0-876a-4ba3-a5ce-018220181097" containerName="mariadb-account-create-update" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637321 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2153bde-84f6-45c0-9b35-e7b4943cbcee" containerName="dnsmasq-dns" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637353 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="328feadc-ef28-4714-acfc-a10826fb69f6" containerName="mariadb-database-create" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.637373 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" containerName="glance-httpd" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.638581 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.648557 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.649569 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cr9h5" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.662013 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.709702 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g6vj6"] Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.743349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.743616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfb8\" (UniqueName: \"kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.744228 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " 
pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.744344 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.854093 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.854161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.854207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.854264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfb8\" (UniqueName: \"kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " 
pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.863697 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.863721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.864806 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:09 crc kubenswrapper[4718]: I0123 16:39:09.911772 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfb8\" (UniqueName: \"kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8\") pod \"nova-cell0-conductor-db-sync-g6vj6\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.003393 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.025846 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.129618 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kwhc9" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.162476 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpv94\" (UniqueName: \"kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94\") pod \"d454d95a-f920-46da-9b19-70d3b987d808\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.162611 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts\") pod \"d454d95a-f920-46da-9b19-70d3b987d808\" (UID: \"d454d95a-f920-46da-9b19-70d3b987d808\") " Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.163885 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d454d95a-f920-46da-9b19-70d3b987d808" (UID: "d454d95a-f920-46da-9b19-70d3b987d808"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.179113 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94" (OuterVolumeSpecName: "kube-api-access-jpv94") pod "d454d95a-f920-46da-9b19-70d3b987d808" (UID: "d454d95a-f920-46da-9b19-70d3b987d808"). InnerVolumeSpecName "kube-api-access-jpv94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.268492 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsbnk\" (UniqueName: \"kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk\") pod \"77678c25-65e5-449f-b905-c8732eb45518\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.274808 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts\") pod \"77678c25-65e5-449f-b905-c8732eb45518\" (UID: \"77678c25-65e5-449f-b905-c8732eb45518\") " Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.275331 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77678c25-65e5-449f-b905-c8732eb45518" (UID: "77678c25-65e5-449f-b905-c8732eb45518"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.276137 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpv94\" (UniqueName: \"kubernetes.io/projected/d454d95a-f920-46da-9b19-70d3b987d808-kube-api-access-jpv94\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.276157 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77678c25-65e5-449f-b905-c8732eb45518-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.276166 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454d95a-f920-46da-9b19-70d3b987d808-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.278173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk" (OuterVolumeSpecName: "kube-api-access-tsbnk") pod "77678c25-65e5-449f-b905-c8732eb45518" (UID: "77678c25-65e5-449f-b905-c8732eb45518"). InnerVolumeSpecName "kube-api-access-tsbnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.295053 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" event={"ID":"d454d95a-f920-46da-9b19-70d3b987d808","Type":"ContainerDied","Data":"be0144fb94fbd14f9b37ee2f377bf00202097e88e3c10f4c1c1ae5137590fc43"} Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.295107 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be0144fb94fbd14f9b37ee2f377bf00202097e88e3c10f4c1c1ae5137590fc43" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.295275 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3f8-account-create-update-pdfm4" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.309247 4718 scope.go:117] "RemoveContainer" containerID="cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a" Jan 23 16:39:10 crc kubenswrapper[4718]: E0123 16:39:10.309505 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-844bdc55c5-4kc9t_openstack(d1f0df76-9570-4514-8a62-3986fb03976b)\"" pod="openstack/heat-api-844bdc55c5-4kc9t" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.322593 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kwhc9" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.325519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kwhc9" event={"ID":"77678c25-65e5-449f-b905-c8732eb45518","Type":"ContainerDied","Data":"de87eefcaa3b30ea5284f61d7a3c4c4dffeb0eff27d353e488f5f815e7d15a7e"} Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.325596 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de87eefcaa3b30ea5284f61d7a3c4c4dffeb0eff27d353e488f5f815e7d15a7e" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.346397 4718 scope.go:117] "RemoveContainer" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" Jan 23 16:39:10 crc kubenswrapper[4718]: E0123 16:39:10.346610 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6cb6c6d8c4-hwj8v_openstack(817285ad-e89c-4123-b42a-b622631062cd)\"" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" podUID="817285ad-e89c-4123-b42a-b622631062cd" Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.383155 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsbnk\" (UniqueName: \"kubernetes.io/projected/77678c25-65e5-449f-b905-c8732eb45518-kube-api-access-tsbnk\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:10 crc kubenswrapper[4718]: W0123 16:39:10.648586 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod506a04e5_5008_4415_9afd_6ccc208f9dd4.slice/crio-5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207 WatchSource:0}: Error finding container 5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207: Status 404 returned error can't find the container with id 
5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207 Jan 23 16:39:10 crc kubenswrapper[4718]: I0123 16:39:10.663870 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g6vj6"] Jan 23 16:39:11 crc kubenswrapper[4718]: I0123 16:39:11.373069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" event={"ID":"506a04e5-5008-4415-9afd-6ccc208f9dd4","Type":"ContainerStarted","Data":"5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207"} Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.389942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerStarted","Data":"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8"} Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.390565 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-central-agent" containerID="cri-o://bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486" gracePeriod=30 Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.390677 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.391143 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="proxy-httpd" containerID="cri-o://37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8" gracePeriod=30 Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.391186 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="sg-core" 
containerID="cri-o://c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4" gracePeriod=30 Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.391223 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-notification-agent" containerID="cri-o://91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737" gracePeriod=30 Jan 23 16:39:12 crc kubenswrapper[4718]: I0123 16:39:12.427734 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=14.59315849 podStartE2EDuration="29.427709979s" podCreationTimestamp="2026-01-23 16:38:43 +0000 UTC" firstStartedPulling="2026-01-23 16:38:56.683912016 +0000 UTC m=+1337.831154007" lastFinishedPulling="2026-01-23 16:39:11.518463505 +0000 UTC m=+1352.665705496" observedRunningTime="2026-01-23 16:39:12.411194112 +0000 UTC m=+1353.558436103" watchObservedRunningTime="2026-01-23 16:39:12.427709979 +0000 UTC m=+1353.574951970" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.435414 4718 generic.go:334] "Generic (PLEG): container finished" podID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerID="37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8" exitCode=0 Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.435970 4718 generic.go:334] "Generic (PLEG): container finished" podID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerID="c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4" exitCode=2 Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.435983 4718 generic.go:334] "Generic (PLEG): container finished" podID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerID="bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486" exitCode=0 Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.435539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerDied","Data":"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8"} Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.436031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerDied","Data":"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4"} Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.436049 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerDied","Data":"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486"} Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.443866 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.507151 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"] Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.507445 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7cd7745668-858qv" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" containerID="cri-o://3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" gracePeriod=60 Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.533563 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.534566 4718 scope.go:117] "RemoveContainer" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" Jan 23 16:39:13 crc kubenswrapper[4718]: E0123 16:39:13.534997 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6cb6c6d8c4-hwj8v_openstack(817285ad-e89c-4123-b42a-b622631062cd)\"" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" podUID="817285ad-e89c-4123-b42a-b622631062cd" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.535308 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.685073 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:39:13 crc kubenswrapper[4718]: I0123 16:39:13.767871 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.248866 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.250687 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.323243 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.341495 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.399158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.447549 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.447709 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.447825 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.447956 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh9jc\" (UniqueName: \"kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.448105 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.448133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.448203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd\") pod \"22073987-e907-4f8b-95a3-bf9534ce1a38\" (UID: \"22073987-e907-4f8b-95a3-bf9534ce1a38\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.451361 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.462951 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts" (OuterVolumeSpecName: "scripts") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.464596 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.467698 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc" (OuterVolumeSpecName: "kube-api-access-hh9jc") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "kube-api-access-hh9jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.514785 4718 generic.go:334] "Generic (PLEG): container finished" podID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerID="91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737" exitCode=0 Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.515275 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerDied","Data":"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737"} Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.515317 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22073987-e907-4f8b-95a3-bf9534ce1a38","Type":"ContainerDied","Data":"5913e3d36def17b3cd5d19b1bac3c511a17b26441a4e83f8b49399c1db6f2f3f"} Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.515337 4718 scope.go:117] "RemoveContainer" containerID="37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8" Jan 23 16:39:14 crc 
kubenswrapper[4718]: I0123 16:39:14.516810 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.516946 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.517101 4718 scope.go:117] "RemoveContainer" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.517330 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6cb6c6d8c4-hwj8v_openstack(817285ad-e89c-4123-b42a-b622631062cd)\"" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" podUID="817285ad-e89c-4123-b42a-b622631062cd" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.520347 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.531922 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.551894 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh9jc\" (UniqueName: \"kubernetes.io/projected/22073987-e907-4f8b-95a3-bf9534ce1a38-kube-api-access-hh9jc\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.551928 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.551940 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22073987-e907-4f8b-95a3-bf9534ce1a38-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.551947 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.554499 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.657232 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.663524 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.669611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.704273 4718 scope.go:117] "RemoveContainer" containerID="c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.713232 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data" (OuterVolumeSpecName: "config-data") pod "22073987-e907-4f8b-95a3-bf9534ce1a38" (UID: "22073987-e907-4f8b-95a3-bf9534ce1a38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.761571 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data\") pod \"d1f0df76-9570-4514-8a62-3986fb03976b\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.761787 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom\") pod \"d1f0df76-9570-4514-8a62-3986fb03976b\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.761905 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jnj6\" (UniqueName: \"kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6\") pod \"d1f0df76-9570-4514-8a62-3986fb03976b\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.762054 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle\") pod \"d1f0df76-9570-4514-8a62-3986fb03976b\" (UID: \"d1f0df76-9570-4514-8a62-3986fb03976b\") " Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.763217 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.763233 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22073987-e907-4f8b-95a3-bf9534ce1a38-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.763744 4718 scope.go:117] "RemoveContainer" containerID="91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.773439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1f0df76-9570-4514-8a62-3986fb03976b" (UID: "d1f0df76-9570-4514-8a62-3986fb03976b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.777809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6" (OuterVolumeSpecName: "kube-api-access-5jnj6") pod "d1f0df76-9570-4514-8a62-3986fb03976b" (UID: "d1f0df76-9570-4514-8a62-3986fb03976b"). InnerVolumeSpecName "kube-api-access-5jnj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.801881 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1f0df76-9570-4514-8a62-3986fb03976b" (UID: "d1f0df76-9570-4514-8a62-3986fb03976b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.826121 4718 scope.go:117] "RemoveContainer" containerID="bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.836342 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.839071 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data" (OuterVolumeSpecName: "config-data") pod "d1f0df76-9570-4514-8a62-3986fb03976b" (UID: "d1f0df76-9570-4514-8a62-3986fb03976b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.869071 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.869112 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.869122 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1f0df76-9570-4514-8a62-3986fb03976b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.869132 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jnj6\" (UniqueName: \"kubernetes.io/projected/d1f0df76-9570-4514-8a62-3986fb03976b-kube-api-access-5jnj6\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.873541 4718 
scope.go:117] "RemoveContainer" containerID="37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.874856 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8\": container with ID starting with 37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8 not found: ID does not exist" containerID="37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.874892 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8"} err="failed to get container status \"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8\": rpc error: code = NotFound desc = could not find container \"37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8\": container with ID starting with 37ddf9aeb7fec814db9a5c42a88fb7b6f3a2a847c0ba7c749857170194321fd8 not found: ID does not exist" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.874911 4718 scope.go:117] "RemoveContainer" containerID="c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.895823 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4\": container with ID starting with c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4 not found: ID does not exist" containerID="c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.895872 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4"} err="failed to get container status \"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4\": rpc error: code = NotFound desc = could not find container \"c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4\": container with ID starting with c5ffc4d58bf4ea52dc546e1283ecf4100246b247f5fb3d71dffe2148a36973e4 not found: ID does not exist" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.895906 4718 scope.go:117] "RemoveContainer" containerID="91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.906656 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737\": container with ID starting with 91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737 not found: ID does not exist" containerID="91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.906712 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737"} err="failed to get container status \"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737\": rpc error: code = NotFound desc = could not find container \"91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737\": container with ID starting with 91bdea5fa773a735ec6cffdc4316d159f613d12304424302fdf206dadc465737 not found: ID does not exist" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.906738 4718 scope.go:117] "RemoveContainer" containerID="bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.914668 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.920677 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486\": container with ID starting with bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486 not found: ID does not exist" containerID="bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.920728 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486"} err="failed to get container status \"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486\": rpc error: code = NotFound desc = could not find container \"bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486\": container with ID starting with bf51ba6bf121527004bc681253120b03016f9cc43c2001cc5981221c611e1486 not found: ID does not exist" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.931298 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"] Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.956238 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.972793 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974092 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77678c25-65e5-449f-b905-c8732eb45518" containerName="mariadb-database-create" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974112 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="77678c25-65e5-449f-b905-c8732eb45518" containerName="mariadb-database-create" Jan 23 16:39:14 crc 
kubenswrapper[4718]: E0123 16:39:14.974148 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="proxy-httpd" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974159 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="proxy-httpd" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974198 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-central-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974206 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-central-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974222 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974230 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974251 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d454d95a-f920-46da-9b19-70d3b987d808" containerName="mariadb-account-create-update" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974259 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d454d95a-f920-46da-9b19-70d3b987d808" containerName="mariadb-account-create-update" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974284 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="sg-core" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974292 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="sg-core" Jan 23 16:39:14 crc 
kubenswrapper[4718]: E0123 16:39:14.974308 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-notification-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974315 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-notification-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: E0123 16:39:14.974328 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974335 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974577 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d454d95a-f920-46da-9b19-70d3b987d808" containerName="mariadb-account-create-update" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974596 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="proxy-httpd" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974607 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974616 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-notification-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974646 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" containerName="ceilometer-central-agent" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974667 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" 
containerName="sg-core" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.974681 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="77678c25-65e5-449f-b905-c8732eb45518" containerName="mariadb-database-create" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.975095 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" containerName="heat-api" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.977491 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.984573 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:39:14 crc kubenswrapper[4718]: I0123 16:39:14.984571 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.045706 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t765s\" (UniqueName: \"kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074641 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.074766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.172131 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22073987-e907-4f8b-95a3-bf9534ce1a38" path="/var/lib/kubelet/pods/22073987-e907-4f8b-95a3-bf9534ce1a38/volumes" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 
16:39:15.178605 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178692 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t765s\" (UniqueName: \"kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178778 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.178974 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.179653 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.179868 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.185271 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.187394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.196136 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 
crc kubenswrapper[4718]: I0123 16:39:15.198476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.202916 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.208446 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t765s\" (UniqueName: \"kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s\") pod \"ceilometer-0\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.333203 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.542599 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-844bdc55c5-4kc9t" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.542729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-844bdc55c5-4kc9t" event={"ID":"d1f0df76-9570-4514-8a62-3986fb03976b","Type":"ContainerDied","Data":"65894d77fc210a80d2a300974a8884e3dd56bc689e2fe94a76a68d94b734a4e1"} Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.543937 4718 scope.go:117] "RemoveContainer" containerID="cf0d6af341af21578449367007352016772bff652c51df10c7dc63b5fc246f0a" Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.618652 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:39:15 crc kubenswrapper[4718]: I0123 16:39:15.641710 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-844bdc55c5-4kc9t"] Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.019731 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.170679 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.315590 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle\") pod \"817285ad-e89c-4123-b42a-b622631062cd\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.315759 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrcw\" (UniqueName: \"kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw\") pod \"817285ad-e89c-4123-b42a-b622631062cd\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.315826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom\") pod \"817285ad-e89c-4123-b42a-b622631062cd\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.315937 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data\") pod \"817285ad-e89c-4123-b42a-b622631062cd\" (UID: \"817285ad-e89c-4123-b42a-b622631062cd\") " Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.329111 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw" (OuterVolumeSpecName: "kube-api-access-pdrcw") pod "817285ad-e89c-4123-b42a-b622631062cd" (UID: "817285ad-e89c-4123-b42a-b622631062cd"). InnerVolumeSpecName "kube-api-access-pdrcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.333408 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "817285ad-e89c-4123-b42a-b622631062cd" (UID: "817285ad-e89c-4123-b42a-b622631062cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.362785 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817285ad-e89c-4123-b42a-b622631062cd" (UID: "817285ad-e89c-4123-b42a-b622631062cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.414070 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data" (OuterVolumeSpecName: "config-data") pod "817285ad-e89c-4123-b42a-b622631062cd" (UID: "817285ad-e89c-4123-b42a-b622631062cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.419276 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.419315 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdrcw\" (UniqueName: \"kubernetes.io/projected/817285ad-e89c-4123-b42a-b622631062cd-kube-api-access-pdrcw\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.419331 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.419341 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817285ad-e89c-4123-b42a-b622631062cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.560935 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerStarted","Data":"532e0b7cb59d7b89b75638a6f40932a89f4f810f241328d61562e74b32468b04"} Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.573713 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.573758 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.573755 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.573806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cb6c6d8c4-hwj8v" event={"ID":"817285ad-e89c-4123-b42a-b622631062cd","Type":"ContainerDied","Data":"713e38400b3d571ffa91032259d4cae018be7135490ec1afd5ef87f3231d673b"} Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.573860 4718 scope.go:117] "RemoveContainer" containerID="f3a209625560e3d6fa9cf323144e8c8276bb44c1dec7a6110f29b0633b8a3813" Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.708702 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"] Jan 23 16:39:16 crc kubenswrapper[4718]: I0123 16:39:16.719994 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6cb6c6d8c4-hwj8v"] Jan 23 16:39:17 crc kubenswrapper[4718]: E0123 16:39:17.023776 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:17 crc kubenswrapper[4718]: E0123 16:39:17.046152 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:17 crc kubenswrapper[4718]: E0123 16:39:17.049154 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:17 crc kubenswrapper[4718]: E0123 16:39:17.049198 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7cd7745668-858qv" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" Jan 23 16:39:17 crc kubenswrapper[4718]: I0123 16:39:17.101970 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9fdab71e-08c8-4269-a9dd-69b152751e4d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.233:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:39:17 crc kubenswrapper[4718]: I0123 16:39:17.178369 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817285ad-e89c-4123-b42a-b622631062cd" path="/var/lib/kubelet/pods/817285ad-e89c-4123-b42a-b622631062cd/volumes" Jan 23 16:39:17 crc kubenswrapper[4718]: I0123 16:39:17.179288 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f0df76-9570-4514-8a62-3986fb03976b" path="/var/lib/kubelet/pods/d1f0df76-9570-4514-8a62-3986fb03976b/volumes" Jan 23 16:39:17 crc kubenswrapper[4718]: I0123 16:39:17.619594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerStarted","Data":"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a"} Jan 23 16:39:18 crc kubenswrapper[4718]: I0123 16:39:18.635329 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerStarted","Data":"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc"} Jan 23 16:39:19 
crc kubenswrapper[4718]: I0123 16:39:19.664208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerStarted","Data":"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3"} Jan 23 16:39:19 crc kubenswrapper[4718]: I0123 16:39:19.811393 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 16:39:19 crc kubenswrapper[4718]: I0123 16:39:19.811536 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:20 crc kubenswrapper[4718]: I0123 16:39:20.568286 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 16:39:21 crc kubenswrapper[4718]: I0123 16:39:21.407745 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 16:39:21 crc kubenswrapper[4718]: I0123 16:39:21.730089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerStarted","Data":"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774"} Jan 23 16:39:21 crc kubenswrapper[4718]: I0123 16:39:21.731718 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:39:21 crc kubenswrapper[4718]: I0123 16:39:21.763599 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.343607173 podStartE2EDuration="7.763573439s" podCreationTimestamp="2026-01-23 16:39:14 +0000 UTC" firstStartedPulling="2026-01-23 16:39:16.059190758 +0000 UTC m=+1357.206432749" lastFinishedPulling="2026-01-23 16:39:20.479157024 +0000 UTC m=+1361.626399015" observedRunningTime="2026-01-23 16:39:21.753418153 +0000 UTC m=+1362.900660144" watchObservedRunningTime="2026-01-23 16:39:21.763573439 
+0000 UTC m=+1362.910815430" Jan 23 16:39:23 crc kubenswrapper[4718]: I0123 16:39:23.776792 4718 generic.go:334] "Generic (PLEG): container finished" podID="0e281000-f117-4ab6-8a56-a741d57ac660" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" exitCode=0 Jan 23 16:39:23 crc kubenswrapper[4718]: I0123 16:39:23.776992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd7745668-858qv" event={"ID":"0e281000-f117-4ab6-8a56-a741d57ac660","Type":"ContainerDied","Data":"3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2"} Jan 23 16:39:24 crc kubenswrapper[4718]: I0123 16:39:24.345554 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:24 crc kubenswrapper[4718]: I0123 16:39:24.788564 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="sg-core" containerID="cri-o://52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3" gracePeriod=30 Jan 23 16:39:24 crc kubenswrapper[4718]: I0123 16:39:24.788646 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-notification-agent" containerID="cri-o://9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc" gracePeriod=30 Jan 23 16:39:24 crc kubenswrapper[4718]: I0123 16:39:24.788764 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-central-agent" containerID="cri-o://a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a" gracePeriod=30 Jan 23 16:39:24 crc kubenswrapper[4718]: I0123 16:39:24.788601 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="proxy-httpd" containerID="cri-o://1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774" gracePeriod=30 Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819422 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9143624-30b1-4779-877a-eecf4a3637ed" containerID="1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774" exitCode=0 Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819855 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9143624-30b1-4779-877a-eecf4a3637ed" containerID="52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3" exitCode=2 Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819869 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9143624-30b1-4779-877a-eecf4a3637ed" containerID="9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc" exitCode=0 Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerDied","Data":"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774"} Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerDied","Data":"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3"} Jan 23 16:39:25 crc kubenswrapper[4718]: I0123 16:39:25.819923 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerDied","Data":"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc"} Jan 23 16:39:26 crc kubenswrapper[4718]: E0123 16:39:26.997200 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2 is running failed: container process not found" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:26 crc kubenswrapper[4718]: E0123 16:39:26.998492 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2 is running failed: container process not found" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:26 crc kubenswrapper[4718]: E0123 16:39:26.998936 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2 is running failed: container process not found" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 16:39:26 crc kubenswrapper[4718]: E0123 16:39:26.998976 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-7cd7745668-858qv" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.763439 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.875042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd7745668-858qv" event={"ID":"0e281000-f117-4ab6-8a56-a741d57ac660","Type":"ContainerDied","Data":"8b471805e66435a7e919165c64127f7348149974d78d515d5fee7f7117b9ca68"} Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.875070 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd7745668-858qv" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.875101 4718 scope.go:117] "RemoveContainer" containerID="3da9a3a9bda8d6bdebf999d5c15de01204ff4d5262558a9677077ab2a2d080b2" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.878907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" event={"ID":"506a04e5-5008-4415-9afd-6ccc208f9dd4","Type":"ContainerStarted","Data":"17782fee830311c6818e7eb8c4ef49e5eb3d1405b713d72abcd6710d06d41fb8"} Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.904575 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" podStartSLOduration=2.193344381 podStartE2EDuration="20.904555056s" podCreationTimestamp="2026-01-23 16:39:09 +0000 UTC" firstStartedPulling="2026-01-23 16:39:10.651176899 +0000 UTC m=+1351.798418890" lastFinishedPulling="2026-01-23 16:39:29.362387574 +0000 UTC m=+1370.509629565" observedRunningTime="2026-01-23 16:39:29.897732742 +0000 UTC m=+1371.044974733" watchObservedRunningTime="2026-01-23 16:39:29.904555056 +0000 UTC m=+1371.051797047" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.920365 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle\") pod 
\"0e281000-f117-4ab6-8a56-a741d57ac660\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.920875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom\") pod \"0e281000-f117-4ab6-8a56-a741d57ac660\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.920956 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwhp\" (UniqueName: \"kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp\") pod \"0e281000-f117-4ab6-8a56-a741d57ac660\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.921022 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data\") pod \"0e281000-f117-4ab6-8a56-a741d57ac660\" (UID: \"0e281000-f117-4ab6-8a56-a741d57ac660\") " Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.927448 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e281000-f117-4ab6-8a56-a741d57ac660" (UID: "0e281000-f117-4ab6-8a56-a741d57ac660"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.927469 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp" (OuterVolumeSpecName: "kube-api-access-txwhp") pod "0e281000-f117-4ab6-8a56-a741d57ac660" (UID: "0e281000-f117-4ab6-8a56-a741d57ac660"). 
InnerVolumeSpecName "kube-api-access-txwhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.960105 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e281000-f117-4ab6-8a56-a741d57ac660" (UID: "0e281000-f117-4ab6-8a56-a741d57ac660"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:29 crc kubenswrapper[4718]: I0123 16:39:29.982054 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data" (OuterVolumeSpecName: "config-data") pod "0e281000-f117-4ab6-8a56-a741d57ac660" (UID: "0e281000-f117-4ab6-8a56-a741d57ac660"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.025481 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.025518 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.025561 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txwhp\" (UniqueName: \"kubernetes.io/projected/0e281000-f117-4ab6-8a56-a741d57ac660-kube-api-access-txwhp\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.025571 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e281000-f117-4ab6-8a56-a741d57ac660-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.222764 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"] Jan 23 16:39:30 crc kubenswrapper[4718]: I0123 16:39:30.237482 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7cd7745668-858qv"] Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.160622 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" path="/var/lib/kubelet/pods/0e281000-f117-4ab6-8a56-a741d57ac660/volumes" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.511266 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.662827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.663341 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t765s\" (UniqueName: \"kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.663527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.663670 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.663775 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.663951 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.664184 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.664339 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml\") pod \"b9143624-30b1-4779-877a-eecf4a3637ed\" (UID: \"b9143624-30b1-4779-877a-eecf4a3637ed\") " Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.664392 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd" (OuterVolumeSpecName: "log-httpd") 
pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.665647 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.665752 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9143624-30b1-4779-877a-eecf4a3637ed-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.669844 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s" (OuterVolumeSpecName: "kube-api-access-t765s") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "kube-api-access-t765s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.673791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts" (OuterVolumeSpecName: "scripts") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.707423 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.772448 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t765s\" (UniqueName: \"kubernetes.io/projected/b9143624-30b1-4779-877a-eecf4a3637ed-kube-api-access-t765s\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.772818 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.772937 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.789052 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.813207 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data" (OuterVolumeSpecName: "config-data") pod "b9143624-30b1-4779-877a-eecf4a3637ed" (UID: "b9143624-30b1-4779-877a-eecf4a3637ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.874938 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.875014 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9143624-30b1-4779-877a-eecf4a3637ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.904132 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9143624-30b1-4779-877a-eecf4a3637ed" containerID="a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a" exitCode=0 Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.904220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerDied","Data":"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a"} Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.904532 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9143624-30b1-4779-877a-eecf4a3637ed","Type":"ContainerDied","Data":"532e0b7cb59d7b89b75638a6f40932a89f4f810f241328d61562e74b32468b04"} Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.904222 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.904617 4718 scope.go:117] "RemoveContainer" containerID="1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.932334 4718 scope.go:117] "RemoveContainer" containerID="52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.945500 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.955279 4718 scope.go:117] "RemoveContainer" containerID="9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.956964 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972032 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972531 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972549 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972562 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972569 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972588 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" 
containerName="ceilometer-central-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972594 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-central-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972606 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="sg-core" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972613 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="sg-core" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972633 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-notification-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972656 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-notification-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972676 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972681 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: E0123 16:39:31.972697 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="proxy-httpd" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972703 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="proxy-httpd" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972892 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" 
containerName="ceilometer-notification-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972905 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972914 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="proxy-httpd" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972922 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="ceilometer-central-agent" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972943 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" containerName="sg-core" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.972956 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e281000-f117-4ab6-8a56-a741d57ac660" containerName="heat-engine" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.973379 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="817285ad-e89c-4123-b42a-b622631062cd" containerName="heat-cfnapi" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.974974 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.981007 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:39:31 crc kubenswrapper[4718]: I0123 16:39:31.981364 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.003460 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.003875 4718 scope.go:117] "RemoveContainer" containerID="a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.064279 4718 scope.go:117] "RemoveContainer" containerID="1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774" Jan 23 16:39:32 crc kubenswrapper[4718]: E0123 16:39:32.066136 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774\": container with ID starting with 1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774 not found: ID does not exist" containerID="1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.066175 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774"} err="failed to get container status \"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774\": rpc error: code = NotFound desc = could not find container \"1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774\": container with ID starting with 1a830e71397cce897a17d22d0823e57c0ec6db89f4eb26e6e55ea4a525bc9774 not found: ID does not exist" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 
16:39:32.066201 4718 scope.go:117] "RemoveContainer" containerID="52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3" Jan 23 16:39:32 crc kubenswrapper[4718]: E0123 16:39:32.066511 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3\": container with ID starting with 52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3 not found: ID does not exist" containerID="52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.066555 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3"} err="failed to get container status \"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3\": rpc error: code = NotFound desc = could not find container \"52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3\": container with ID starting with 52cb2f411b6e4fccf8fb98016f467c64f0043ff174ee68eb3e3bc35fede629b3 not found: ID does not exist" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.066594 4718 scope.go:117] "RemoveContainer" containerID="9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc" Jan 23 16:39:32 crc kubenswrapper[4718]: E0123 16:39:32.066917 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc\": container with ID starting with 9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc not found: ID does not exist" containerID="9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.066940 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc"} err="failed to get container status \"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc\": rpc error: code = NotFound desc = could not find container \"9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc\": container with ID starting with 9745689b62076beef8175efea0014b36d2d50718a17bb2b98d87b5a56df1e6dc not found: ID does not exist" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.066961 4718 scope.go:117] "RemoveContainer" containerID="a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a" Jan 23 16:39:32 crc kubenswrapper[4718]: E0123 16:39:32.067192 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a\": container with ID starting with a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a not found: ID does not exist" containerID="a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.067212 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a"} err="failed to get container status \"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a\": rpc error: code = NotFound desc = could not find container \"a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a\": container with ID starting with a2b5a42b6dbf40546e286c1e6122a0755badfd2ec2782b907364284579854b2a not found: ID does not exist" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080291 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data\") pod \"ceilometer-0\" (UID: 
\"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080366 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080418 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqhxm\" (UniqueName: \"kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080441 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080463 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.080486 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 
16:39:32.080506 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqhxm\" (UniqueName: \"kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182713 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.182788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.183295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.183851 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.199281 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.199553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.201450 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.202420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.206500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqhxm\" (UniqueName: \"kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm\") pod \"ceilometer-0\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.307692 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:32 crc kubenswrapper[4718]: I0123 16:39:32.947242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:33 crc kubenswrapper[4718]: I0123 16:39:33.157028 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9143624-30b1-4779-877a-eecf4a3637ed" path="/var/lib/kubelet/pods/b9143624-30b1-4779-877a-eecf4a3637ed/volumes" Jan 23 16:39:33 crc kubenswrapper[4718]: I0123 16:39:33.937685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerStarted","Data":"450ff5422a40839a6f3f18d4dd824f90caac5357bb7526e8e4754a88946523af"} Jan 23 16:39:33 crc kubenswrapper[4718]: I0123 16:39:33.938114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerStarted","Data":"4611d13c3f2700ed0a5124a61867df119f293c29a18defb224a1bb12d0e9deda"} Jan 23 16:39:34 crc kubenswrapper[4718]: I0123 16:39:34.951715 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerStarted","Data":"29f0c5ef1ef4c0f5de9834b64c51bbf35f2404be1a2959eb4ab7c1d32becdc8f"} Jan 23 16:39:35 crc kubenswrapper[4718]: I0123 16:39:35.967706 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerStarted","Data":"53bb6f7795211e55cbf00affd914fc1e494e06b0c2d4260b81c0f04c83f7c754"} Jan 23 16:39:37 crc kubenswrapper[4718]: I0123 16:39:37.379706 4718 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb3f14be9-3ab8-4e54-852d-82a373d11028"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb3f14be9-3ab8-4e54-852d-82a373d11028] : Timed out while waiting 
for systemd to remove kubepods-besteffort-podb3f14be9_3ab8_4e54_852d_82a373d11028.slice" Jan 23 16:39:37 crc kubenswrapper[4718]: E0123 16:39:37.380435 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb3f14be9-3ab8-4e54-852d-82a373d11028] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb3f14be9-3ab8-4e54-852d-82a373d11028] : Timed out while waiting for systemd to remove kubepods-besteffort-podb3f14be9_3ab8_4e54_852d_82a373d11028.slice" pod="openstack/glance-default-internal-api-0" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" Jan 23 16:39:37 crc kubenswrapper[4718]: I0123 16:39:37.996425 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerStarted","Data":"4e28d089e2a44636418d81e46873c0cc258c3b690a29ddfda43d7f3a9ec83da4"} Jan 23 16:39:37 crc kubenswrapper[4718]: I0123 16:39:37.996493 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:37 crc kubenswrapper[4718]: I0123 16:39:37.997026 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.020731 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.435945827 podStartE2EDuration="7.020701474s" podCreationTimestamp="2026-01-23 16:39:31 +0000 UTC" firstStartedPulling="2026-01-23 16:39:32.982380198 +0000 UTC m=+1374.129622189" lastFinishedPulling="2026-01-23 16:39:37.567135845 +0000 UTC m=+1378.714377836" observedRunningTime="2026-01-23 16:39:38.016548442 +0000 UTC m=+1379.163790443" watchObservedRunningTime="2026-01-23 16:39:38.020701474 +0000 UTC m=+1379.167943465" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.047758 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.062480 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.087303 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.089749 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.100051 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.100270 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.111802 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.254228 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.254321 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.254901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.255240 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.255748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.255931 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54k94\" (UniqueName: \"kubernetes.io/projected/47efd469-ac22-42d8-bb00-fd20450c9e7e-kube-api-access-54k94\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.256084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.256181 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358576 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54k94\" (UniqueName: \"kubernetes.io/projected/47efd469-ac22-42d8-bb00-fd20450c9e7e-kube-api-access-54k94\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358848 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.358995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.359550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.359477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47efd469-ac22-42d8-bb00-fd20450c9e7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.366608 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.366683 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6fdb96493887a375a1e9d3a0dda74de9dd624a402ba78fff08bb357f5ac00041/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.368826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.371740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.372476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.374646 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47efd469-ac22-42d8-bb00-fd20450c9e7e-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.386420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54k94\" (UniqueName: \"kubernetes.io/projected/47efd469-ac22-42d8-bb00-fd20450c9e7e-kube-api-access-54k94\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.441801 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4e56f8-dd68-42ac-aa79-33982c43c8a8\") pod \"glance-default-internal-api-0\" (UID: \"47efd469-ac22-42d8-bb00-fd20450c9e7e\") " pod="openstack/glance-default-internal-api-0" Jan 23 16:39:38 crc kubenswrapper[4718]: I0123 16:39:38.719392 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:39 crc kubenswrapper[4718]: I0123 16:39:39.184960 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f14be9-3ab8-4e54-852d-82a373d11028" path="/var/lib/kubelet/pods/b3f14be9-3ab8-4e54-852d-82a373d11028/volumes" Jan 23 16:39:39 crc kubenswrapper[4718]: I0123 16:39:39.459909 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 16:39:40 crc kubenswrapper[4718]: I0123 16:39:40.036393 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47efd469-ac22-42d8-bb00-fd20450c9e7e","Type":"ContainerStarted","Data":"fce384ae4f59fa0e39e2534ba26f4fe6d73e817ea26fcb48dafddbd0593c5b7e"} Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.081938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47efd469-ac22-42d8-bb00-fd20450c9e7e","Type":"ContainerStarted","Data":"345dc6cde777ba632c6de083e05cf6a47ed0d6675d1b73406c523a617fc816ca"} Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.084515 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47efd469-ac22-42d8-bb00-fd20450c9e7e","Type":"ContainerStarted","Data":"2fe1e28e8fff84fd4a8484bb2b1c38c084c71e847fa022e6b382b3c3c4c995ad"} Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.161053 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.161027295 podStartE2EDuration="3.161027295s" podCreationTimestamp="2026-01-23 16:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:41.121125906 +0000 UTC m=+1382.268367897" watchObservedRunningTime="2026-01-23 16:39:41.161027295 +0000 UTC 
m=+1382.308269276" Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.372007 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.372365 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-central-agent" containerID="cri-o://450ff5422a40839a6f3f18d4dd824f90caac5357bb7526e8e4754a88946523af" gracePeriod=30 Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.372531 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="proxy-httpd" containerID="cri-o://4e28d089e2a44636418d81e46873c0cc258c3b690a29ddfda43d7f3a9ec83da4" gracePeriod=30 Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.372665 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-notification-agent" containerID="cri-o://29f0c5ef1ef4c0f5de9834b64c51bbf35f2404be1a2959eb4ab7c1d32becdc8f" gracePeriod=30 Jan 23 16:39:41 crc kubenswrapper[4718]: I0123 16:39:41.372918 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="sg-core" containerID="cri-o://53bb6f7795211e55cbf00affd914fc1e494e06b0c2d4260b81c0f04c83f7c754" gracePeriod=30 Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.097693 4718 generic.go:334] "Generic (PLEG): container finished" podID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerID="4e28d089e2a44636418d81e46873c0cc258c3b690a29ddfda43d7f3a9ec83da4" exitCode=0 Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.099229 4718 generic.go:334] "Generic (PLEG): container finished" podID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" 
containerID="53bb6f7795211e55cbf00affd914fc1e494e06b0c2d4260b81c0f04c83f7c754" exitCode=2 Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.099394 4718 generic.go:334] "Generic (PLEG): container finished" podID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerID="29f0c5ef1ef4c0f5de9834b64c51bbf35f2404be1a2959eb4ab7c1d32becdc8f" exitCode=0 Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.097777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerDied","Data":"4e28d089e2a44636418d81e46873c0cc258c3b690a29ddfda43d7f3a9ec83da4"} Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.099557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerDied","Data":"53bb6f7795211e55cbf00affd914fc1e494e06b0c2d4260b81c0f04c83f7c754"} Jan 23 16:39:42 crc kubenswrapper[4718]: I0123 16:39:42.099587 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerDied","Data":"29f0c5ef1ef4c0f5de9834b64c51bbf35f2404be1a2959eb4ab7c1d32becdc8f"} Jan 23 16:39:43 crc kubenswrapper[4718]: I0123 16:39:43.116097 4718 generic.go:334] "Generic (PLEG): container finished" podID="506a04e5-5008-4415-9afd-6ccc208f9dd4" containerID="17782fee830311c6818e7eb8c4ef49e5eb3d1405b713d72abcd6710d06d41fb8" exitCode=0 Jan 23 16:39:43 crc kubenswrapper[4718]: I0123 16:39:43.116607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" event={"ID":"506a04e5-5008-4415-9afd-6ccc208f9dd4","Type":"ContainerDied","Data":"17782fee830311c6818e7eb8c4ef49e5eb3d1405b713d72abcd6710d06d41fb8"} Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.681356 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.878568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts\") pod \"506a04e5-5008-4415-9afd-6ccc208f9dd4\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.878686 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfb8\" (UniqueName: \"kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8\") pod \"506a04e5-5008-4415-9afd-6ccc208f9dd4\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.878753 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data\") pod \"506a04e5-5008-4415-9afd-6ccc208f9dd4\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.880002 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle\") pod \"506a04e5-5008-4415-9afd-6ccc208f9dd4\" (UID: \"506a04e5-5008-4415-9afd-6ccc208f9dd4\") " Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.886423 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts" (OuterVolumeSpecName: "scripts") pod "506a04e5-5008-4415-9afd-6ccc208f9dd4" (UID: "506a04e5-5008-4415-9afd-6ccc208f9dd4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.889655 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8" (OuterVolumeSpecName: "kube-api-access-9mfb8") pod "506a04e5-5008-4415-9afd-6ccc208f9dd4" (UID: "506a04e5-5008-4415-9afd-6ccc208f9dd4"). InnerVolumeSpecName "kube-api-access-9mfb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.920470 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data" (OuterVolumeSpecName: "config-data") pod "506a04e5-5008-4415-9afd-6ccc208f9dd4" (UID: "506a04e5-5008-4415-9afd-6ccc208f9dd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.921310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506a04e5-5008-4415-9afd-6ccc208f9dd4" (UID: "506a04e5-5008-4415-9afd-6ccc208f9dd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.983106 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.983155 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfb8\" (UniqueName: \"kubernetes.io/projected/506a04e5-5008-4415-9afd-6ccc208f9dd4-kube-api-access-9mfb8\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.983174 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:44 crc kubenswrapper[4718]: I0123 16:39:44.983188 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506a04e5-5008-4415-9afd-6ccc208f9dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.154738 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.164090 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g6vj6" event={"ID":"506a04e5-5008-4415-9afd-6ccc208f9dd4","Type":"ContainerDied","Data":"5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207"} Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.164145 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db5a9612b32f639b4141c93126168861ec41f10b275117733807392b13f4207" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.301731 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 16:39:45 crc kubenswrapper[4718]: E0123 16:39:45.306599 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506a04e5-5008-4415-9afd-6ccc208f9dd4" containerName="nova-cell0-conductor-db-sync" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.306642 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="506a04e5-5008-4415-9afd-6ccc208f9dd4" containerName="nova-cell0-conductor-db-sync" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.307018 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="506a04e5-5008-4415-9afd-6ccc208f9dd4" containerName="nova-cell0-conductor-db-sync" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.307907 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.309690 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cr9h5" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.320976 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.329192 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.431469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.431587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9fg2\" (UniqueName: \"kubernetes.io/projected/0ac70adf-8253-4b66-91b9-beb3c28648d5-kube-api-access-n9fg2\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.431748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.535090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.535524 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9fg2\" (UniqueName: \"kubernetes.io/projected/0ac70adf-8253-4b66-91b9-beb3c28648d5-kube-api-access-n9fg2\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.536028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.541563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.550219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac70adf-8253-4b66-91b9-beb3c28648d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.560173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9fg2\" (UniqueName: \"kubernetes.io/projected/0ac70adf-8253-4b66-91b9-beb3c28648d5-kube-api-access-n9fg2\") pod \"nova-cell0-conductor-0\" 
(UID: \"0ac70adf-8253-4b66-91b9-beb3c28648d5\") " pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:45 crc kubenswrapper[4718]: I0123 16:39:45.652676 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.185294 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.191421 4718 generic.go:334] "Generic (PLEG): container finished" podID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerID="450ff5422a40839a6f3f18d4dd824f90caac5357bb7526e8e4754a88946523af" exitCode=0 Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.191480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerDied","Data":"450ff5422a40839a6f3f18d4dd824f90caac5357bb7526e8e4754a88946523af"} Jan 23 16:39:46 crc kubenswrapper[4718]: W0123 16:39:46.199434 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ac70adf_8253_4b66_91b9_beb3c28648d5.slice/crio-0733d969933cf56e798e00b7fcbc241585f0d6756823023577899e15610d5081 WatchSource:0}: Error finding container 0733d969933cf56e798e00b7fcbc241585f0d6756823023577899e15610d5081: Status 404 returned error can't find the container with id 0733d969933cf56e798e00b7fcbc241585f0d6756823023577899e15610d5081 Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.380766 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.458688 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.458761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.458871 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.458946 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.459252 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.459340 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.459467 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.459737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.459774 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqhxm\" (UniqueName: \"kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm\") pod \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\" (UID: \"8bceeb58-38a7-4691-b51c-0f2e270ff9da\") " Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.460489 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.460529 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bceeb58-38a7-4691-b51c-0f2e270ff9da-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.464756 4718 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts" (OuterVolumeSpecName: "scripts") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.464983 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm" (OuterVolumeSpecName: "kube-api-access-dqhxm") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "kube-api-access-dqhxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.502772 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.551306 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.562926 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.562962 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqhxm\" (UniqueName: \"kubernetes.io/projected/8bceeb58-38a7-4691-b51c-0f2e270ff9da-kube-api-access-dqhxm\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.562974 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.562983 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.604494 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data" (OuterVolumeSpecName: "config-data") pod "8bceeb58-38a7-4691-b51c-0f2e270ff9da" (UID: "8bceeb58-38a7-4691-b51c-0f2e270ff9da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:39:46 crc kubenswrapper[4718]: I0123 16:39:46.665561 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bceeb58-38a7-4691-b51c-0f2e270ff9da-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.209344 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bceeb58-38a7-4691-b51c-0f2e270ff9da","Type":"ContainerDied","Data":"4611d13c3f2700ed0a5124a61867df119f293c29a18defb224a1bb12d0e9deda"} Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.209918 4718 scope.go:117] "RemoveContainer" containerID="4e28d089e2a44636418d81e46873c0cc258c3b690a29ddfda43d7f3a9ec83da4" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.209716 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.213371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ac70adf-8253-4b66-91b9-beb3c28648d5","Type":"ContainerStarted","Data":"396107333fa5fe0851f9a21d40bb3c048f1ad0de731c751c77f25ec627d373b6"} Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.213432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0ac70adf-8253-4b66-91b9-beb3c28648d5","Type":"ContainerStarted","Data":"0733d969933cf56e798e00b7fcbc241585f0d6756823023577899e15610d5081"} Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.214445 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.253315 4718 scope.go:117] "RemoveContainer" containerID="53bb6f7795211e55cbf00affd914fc1e494e06b0c2d4260b81c0f04c83f7c754" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.300190 4718 
scope.go:117] "RemoveContainer" containerID="29f0c5ef1ef4c0f5de9834b64c51bbf35f2404be1a2959eb4ab7c1d32becdc8f" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.309745 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.3097108 podStartE2EDuration="2.3097108s" podCreationTimestamp="2026-01-23 16:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:47.288366614 +0000 UTC m=+1388.435608615" watchObservedRunningTime="2026-01-23 16:39:47.3097108 +0000 UTC m=+1388.456952791" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.332792 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.338816 4718 scope.go:117] "RemoveContainer" containerID="450ff5422a40839a6f3f18d4dd824f90caac5357bb7526e8e4754a88946523af" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.349876 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373007 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:47 crc kubenswrapper[4718]: E0123 16:39:47.373544 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-central-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373563 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-central-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: E0123 16:39:47.373592 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-notification-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373599 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-notification-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: E0123 16:39:47.373611 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="sg-core" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373617 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="sg-core" Jan 23 16:39:47 crc kubenswrapper[4718]: E0123 16:39:47.373642 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="proxy-httpd" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373647 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="proxy-httpd" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.373996 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="sg-core" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.374017 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="proxy-httpd" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.374033 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-central-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.374045 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" containerName="ceilometer-notification-agent" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.376049 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.383018 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.417326 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.442144 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541195 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541289 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541411 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data\") pod \"ceilometer-0\" (UID: 
\"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541463 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541544 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.541609 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643504 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643565 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.643704 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.644149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " 
pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.645025 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.649690 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.650296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.664391 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.665083 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.669829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vjn\" (UniqueName: 
\"kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn\") pod \"ceilometer-0\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " pod="openstack/ceilometer-0" Jan 23 16:39:47 crc kubenswrapper[4718]: I0123 16:39:47.738320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:39:48 crc kubenswrapper[4718]: I0123 16:39:48.281313 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:39:48 crc kubenswrapper[4718]: I0123 16:39:48.721262 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:48 crc kubenswrapper[4718]: I0123 16:39:48.721314 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:48 crc kubenswrapper[4718]: I0123 16:39:48.781044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:48 crc kubenswrapper[4718]: I0123 16:39:48.783516 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:49 crc kubenswrapper[4718]: I0123 16:39:49.193513 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bceeb58-38a7-4691-b51c-0f2e270ff9da" path="/var/lib/kubelet/pods/8bceeb58-38a7-4691-b51c-0f2e270ff9da/volumes" Jan 23 16:39:49 crc kubenswrapper[4718]: I0123 16:39:49.256052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerStarted","Data":"c532819e7be6234cf987aa349f7a64368fa6221ad0848139458560c3350dd1ea"} Jan 23 16:39:49 crc kubenswrapper[4718]: I0123 16:39:49.256313 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerStarted","Data":"d1358c6ffb4a16c64c7b149cd62fc37a659470d2f560c9271357fbe205994f5f"} Jan 23 16:39:49 crc kubenswrapper[4718]: I0123 16:39:49.256380 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:49 crc kubenswrapper[4718]: I0123 16:39:49.256551 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:50 crc kubenswrapper[4718]: I0123 16:39:50.269777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerStarted","Data":"25aa140d1c481c4e322e2dcec5a4909a3499d36780ef2a745e4cd3d276cbcca5"} Jan 23 16:39:51 crc kubenswrapper[4718]: I0123 16:39:51.297179 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerStarted","Data":"27d728899aca49c541bd6e8e2114f2abd12fc1ca65f4e63fe799ea6015b07cbb"} Jan 23 16:39:51 crc kubenswrapper[4718]: I0123 16:39:51.297227 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:51 crc kubenswrapper[4718]: I0123 16:39:51.297812 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:51 crc kubenswrapper[4718]: I0123 16:39:51.428399 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:52 crc kubenswrapper[4718]: I0123 16:39:52.312911 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:39:52 crc kubenswrapper[4718]: I0123 16:39:52.312918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerStarted","Data":"f33196fd1ba87185c80339a0783cb42522dce7e7f418ffe1fe6b8016718881b5"} Jan 23 16:39:52 crc kubenswrapper[4718]: I0123 16:39:52.314120 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:39:52 crc kubenswrapper[4718]: I0123 16:39:52.344067 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.120448277 podStartE2EDuration="5.343642148s" podCreationTimestamp="2026-01-23 16:39:47 +0000 UTC" firstStartedPulling="2026-01-23 16:39:48.282092299 +0000 UTC m=+1389.429334290" lastFinishedPulling="2026-01-23 16:39:51.50528618 +0000 UTC m=+1392.652528161" observedRunningTime="2026-01-23 16:39:52.332965049 +0000 UTC m=+1393.480207060" watchObservedRunningTime="2026-01-23 16:39:52.343642148 +0000 UTC m=+1393.490884139" Jan 23 16:39:52 crc kubenswrapper[4718]: I0123 16:39:52.523756 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 16:39:55 crc kubenswrapper[4718]: I0123 16:39:55.686297 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.411936 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c7phc"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.422109 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.433713 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.437442 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.511221 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c7phc"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.536571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.536750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.536800 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gkr\" (UniqueName: \"kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.537026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.601291 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.603353 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.608133 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.633095 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649537 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649777 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: 
\"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gkr\" (UniqueName: \"kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649858 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqns\" (UniqueName: \"kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.649930 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.660874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: 
\"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.662830 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.680737 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.691241 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gkr\" (UniqueName: \"kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.694807 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts\") pod \"nova-cell0-cell-mapping-c7phc\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.695557 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.702178 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.717515 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.754117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.754595 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kpr\" (UniqueName: \"kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.754722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.754811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqns\" (UniqueName: \"kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.754893 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.755043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.770272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.771429 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.789746 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.791801 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.807566 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.809793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.844509 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzqns\" (UniqueName: \"kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns\") pod \"nova-cell1-novncproxy-0\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.848813 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.861604 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.861689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.861743 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d5kpr\" (UniqueName: \"kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.873472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.873989 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.910924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kpr\" (UniqueName: \"kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr\") pod \"nova-scheduler-0\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " pod="openstack/nova-scheduler-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.930889 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.947814 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.966506 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcgf\" (UniqueName: \"kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.966563 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.966605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.966734 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.969514 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.973419 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:39:56 crc kubenswrapper[4718]: I0123 16:39:56.975498 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.009113 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.051099 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.054530 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073384 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" 
Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073470 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073541 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr62\" (UniqueName: \"kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073610 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxh2\" (UniqueName: \"kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " 
pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcgf\" (UniqueName: \"kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073699 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073716 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.073768 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.078272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.085389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.105147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.109489 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcgf\" (UniqueName: \"kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf\") pod \"nova-api-0\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.120371 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.193766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.193851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.193898 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.193934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.193977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkr62\" (UniqueName: \"kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.194063 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxh2\" (UniqueName: \"kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2\") pod 
\"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.194286 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.194307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.194341 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.194380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.196030 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc 
kubenswrapper[4718]: I0123 16:39:57.196190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.196522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.197024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.197856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.197873 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.207268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.208494 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.223753 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxh2\" (UniqueName: \"kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2\") pod \"dnsmasq-dns-568d7fd7cf-5fb9b\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.224170 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkr62\" (UniqueName: \"kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62\") pod \"nova-metadata-0\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.341104 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.362362 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.429325 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.711430 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c7phc"] Jan 23 16:39:57 crc kubenswrapper[4718]: I0123 16:39:57.975075 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.375560 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.484670 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.492980 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7196bcab-6196-4625-b162-3ed1c69bc1ad","Type":"ContainerStarted","Data":"ae0ae3c19ed5e3ffbeb691197b02eda7f9c37e34b4270153ebc85686a92eeb50"} Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.494670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d4f87301-48b3-44b2-aa3e-42bd55c78768","Type":"ContainerStarted","Data":"af2abe38de11c3cf0ae2339b07209490156e5a05ba462e4b80a7e37c9490d441"} Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.496362 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerStarted","Data":"4d5444d13b6cfadf069f850373d383a48930d8baa7f7ebbcd43c9a9706de2af5"} Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.498265 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c7phc" event={"ID":"f8db3c02-bc86-4e35-8cce-3fba179bfe88","Type":"ContainerStarted","Data":"5fe5268e1fc895fd279ed08c0f2fad6d958c47574bda524ef22f26b709867bae"} Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.498325 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c7phc" event={"ID":"f8db3c02-bc86-4e35-8cce-3fba179bfe88","Type":"ContainerStarted","Data":"9dfb003ad524cadd48ae0974451a75ff74738742760dcd356e1d33370a263b0d"} Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.543590 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-c7phc" podStartSLOduration=2.543563817 podStartE2EDuration="2.543563817s" podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:39:58.525084599 +0000 UTC m=+1399.672326590" watchObservedRunningTime="2026-01-23 16:39:58.543563817 +0000 UTC m=+1399.690805808" Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.669665 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:39:58 crc kubenswrapper[4718]: I0123 16:39:58.680645 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.481931 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98zcf"] Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.484842 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.493001 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.493473 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.509836 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.509951 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.509991 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7749\" (UniqueName: \"kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.510152 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts\") pod \"nova-cell1-conductor-db-sync-98zcf\" 
(UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.533128 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerStarted","Data":"3d19bd54be9db420397abeba057a20c1820a72acc3d013d0495ae717c55a1278"} Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.540121 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98zcf"] Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.572293 4718 generic.go:334] "Generic (PLEG): container finished" podID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerID="1698d98ae401ad23d007232fc7d0b7c837a29204e3d91afa2bbd491010ebb3d2" exitCode=0 Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.573659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" event={"ID":"942e0bee-95f9-4e8d-8157-8353e9b80b41","Type":"ContainerDied","Data":"1698d98ae401ad23d007232fc7d0b7c837a29204e3d91afa2bbd491010ebb3d2"} Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.573694 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" event={"ID":"942e0bee-95f9-4e8d-8157-8353e9b80b41","Type":"ContainerStarted","Data":"0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18"} Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.611649 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.612292 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.612359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.612397 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7749\" (UniqueName: \"kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.620898 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.622472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.627910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.641818 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7749\" (UniqueName: \"kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749\") pod \"nova-cell1-conductor-db-sync-98zcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:39:59 crc kubenswrapper[4718]: I0123 16:39:59.834553 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:40:00 crc kubenswrapper[4718]: I0123 16:40:00.608349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" event={"ID":"942e0bee-95f9-4e8d-8157-8353e9b80b41","Type":"ContainerStarted","Data":"2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba"} Jan 23 16:40:00 crc kubenswrapper[4718]: I0123 16:40:00.610000 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:40:00 crc kubenswrapper[4718]: I0123 16:40:00.675084 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" podStartSLOduration=4.675060064 podStartE2EDuration="4.675060064s" podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:00.628155386 +0000 UTC m=+1401.775397377" watchObservedRunningTime="2026-01-23 16:40:00.675060064 +0000 UTC m=+1401.822302115" Jan 23 16:40:00 crc kubenswrapper[4718]: I0123 16:40:00.743714 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-cell1-conductor-db-sync-98zcf"] Jan 23 16:40:01 crc kubenswrapper[4718]: I0123 16:40:01.209984 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:01 crc kubenswrapper[4718]: I0123 16:40:01.224499 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:01 crc kubenswrapper[4718]: I0123 16:40:01.643338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98zcf" event={"ID":"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf","Type":"ContainerStarted","Data":"515c04a29fef45d29dd5ebf358fb085926754f858e699d3ad6c424457ab53fc6"} Jan 23 16:40:01 crc kubenswrapper[4718]: I0123 16:40:01.643936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98zcf" event={"ID":"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf","Type":"ContainerStarted","Data":"c6568ad640ed92599e80c7449738d03536478de72663cbe91c18be66aedb707b"} Jan 23 16:40:01 crc kubenswrapper[4718]: I0123 16:40:01.669543 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-98zcf" podStartSLOduration=2.66951384 podStartE2EDuration="2.66951384s" podCreationTimestamp="2026-01-23 16:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:01.667411973 +0000 UTC m=+1402.814653984" watchObservedRunningTime="2026-01-23 16:40:01.66951384 +0000 UTC m=+1402.816755831" Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.070581 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.071572 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-central-agent" 
containerID="cri-o://c532819e7be6234cf987aa349f7a64368fa6221ad0848139458560c3350dd1ea" gracePeriod=30 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.071821 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="proxy-httpd" containerID="cri-o://f33196fd1ba87185c80339a0783cb42522dce7e7f418ffe1fe6b8016718881b5" gracePeriod=30 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.071864 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="sg-core" containerID="cri-o://27d728899aca49c541bd6e8e2114f2abd12fc1ca65f4e63fe799ea6015b07cbb" gracePeriod=30 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.071893 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-notification-agent" containerID="cri-o://25aa140d1c481c4e322e2dcec5a4909a3499d36780ef2a745e4cd3d276cbcca5" gracePeriod=30 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.087733 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.246:3000/\": EOF" Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.690149 4718 generic.go:334] "Generic (PLEG): container finished" podID="d762625c-0c19-4652-a92f-a59698a086e3" containerID="f33196fd1ba87185c80339a0783cb42522dce7e7f418ffe1fe6b8016718881b5" exitCode=0 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.690935 4718 generic.go:334] "Generic (PLEG): container finished" podID="d762625c-0c19-4652-a92f-a59698a086e3" containerID="27d728899aca49c541bd6e8e2114f2abd12fc1ca65f4e63fe799ea6015b07cbb" exitCode=2 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 
16:40:03.691026 4718 generic.go:334] "Generic (PLEG): container finished" podID="d762625c-0c19-4652-a92f-a59698a086e3" containerID="c532819e7be6234cf987aa349f7a64368fa6221ad0848139458560c3350dd1ea" exitCode=0 Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.690316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerDied","Data":"f33196fd1ba87185c80339a0783cb42522dce7e7f418ffe1fe6b8016718881b5"} Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.691239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerDied","Data":"27d728899aca49c541bd6e8e2114f2abd12fc1ca65f4e63fe799ea6015b07cbb"} Jan 23 16:40:03 crc kubenswrapper[4718]: I0123 16:40:03.691316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerDied","Data":"c532819e7be6234cf987aa349f7a64368fa6221ad0848139458560c3350dd1ea"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.709822 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-lm8pg"] Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.711967 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.726318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d4f87301-48b3-44b2-aa3e-42bd55c78768","Type":"ContainerStarted","Data":"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.726497 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d4f87301-48b3-44b2-aa3e-42bd55c78768" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac" gracePeriod=30 Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.738712 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-lm8pg"] Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.739519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerStarted","Data":"8233f3106f9961c829e4a42d9ad2ed32959a4d702b0fc464b3293797c22ba5a8"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.739560 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerStarted","Data":"721e7c3896883ba1f671c64b7ca0e619f95742a3f7b640494a72c9bfcd355630"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.773732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerStarted","Data":"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.773785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerStarted","Data":"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.773957 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-log" containerID="cri-o://23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143" gracePeriod=30 Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.774280 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-metadata" containerID="cri-o://8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784" gracePeriod=30 Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.794111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7196bcab-6196-4625-b162-3ed1c69bc1ad","Type":"ContainerStarted","Data":"0906ce8ddda5c6f166243749547ffa84bf138f5ecca6ada5a791e1b5b1ad7ec0"} Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.801606 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.809617184 podStartE2EDuration="9.801582013s" podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="2026-01-23 16:39:58.69757549 +0000 UTC m=+1399.844817481" lastFinishedPulling="2026-01-23 16:40:04.689540319 +0000 UTC m=+1405.836782310" observedRunningTime="2026-01-23 16:40:05.77924083 +0000 UTC m=+1406.926482821" watchObservedRunningTime="2026-01-23 16:40:05.801582013 +0000 UTC m=+1406.948824004" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.830873 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.520551873 
podStartE2EDuration="9.830844185s" podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="2026-01-23 16:39:58.36604483 +0000 UTC m=+1399.513286811" lastFinishedPulling="2026-01-23 16:40:04.676337132 +0000 UTC m=+1405.823579123" observedRunningTime="2026-01-23 16:40:05.75666007 +0000 UTC m=+1406.903902061" watchObservedRunningTime="2026-01-23 16:40:05.830844185 +0000 UTC m=+1406.978086176" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.850681 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-03bd-account-create-update-9h26j"] Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.853087 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.856030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.856181 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtbv\" (UniqueName: \"kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.857084 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.883074 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.693978849 podStartE2EDuration="9.882616554s" 
podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="2026-01-23 16:39:58.475541019 +0000 UTC m=+1399.622783010" lastFinishedPulling="2026-01-23 16:40:04.664178724 +0000 UTC m=+1405.811420715" observedRunningTime="2026-01-23 16:40:05.806102786 +0000 UTC m=+1406.953344767" watchObservedRunningTime="2026-01-23 16:40:05.882616554 +0000 UTC m=+1407.029858545" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.899344 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-03bd-account-create-update-9h26j"] Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.905734 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.231986884 podStartE2EDuration="9.905712608s" podCreationTimestamp="2026-01-23 16:39:56 +0000 UTC" firstStartedPulling="2026-01-23 16:39:58.012542137 +0000 UTC m=+1399.159784128" lastFinishedPulling="2026-01-23 16:40:04.686267861 +0000 UTC m=+1405.833509852" observedRunningTime="2026-01-23 16:40:05.823430994 +0000 UTC m=+1406.970672985" watchObservedRunningTime="2026-01-23 16:40:05.905712608 +0000 UTC m=+1407.052954599" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.959039 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.959162 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:05 crc 
kubenswrapper[4718]: I0123 16:40:05.959196 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zkt\" (UniqueName: \"kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.959250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvtbv\" (UniqueName: \"kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.960229 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:05 crc kubenswrapper[4718]: I0123 16:40:05.985317 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvtbv\" (UniqueName: \"kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv\") pod \"aodh-db-create-lm8pg\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.034200 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.071149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.071216 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zkt\" (UniqueName: \"kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.072613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.091955 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zkt\" (UniqueName: \"kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt\") pod \"aodh-03bd-account-create-update-9h26j\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.311469 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:06 crc kubenswrapper[4718]: I0123 16:40:06.576515 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.692813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data\") pod \"945685c3-965d-45a3-b1dc-f1fea0a489dc\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.692853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nwjs\" (UniqueName: \"kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs\") pod \"945685c3-965d-45a3-b1dc-f1fea0a489dc\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.693008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom\") pod \"945685c3-965d-45a3-b1dc-f1fea0a489dc\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.693170 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle\") pod \"945685c3-965d-45a3-b1dc-f1fea0a489dc\" (UID: \"945685c3-965d-45a3-b1dc-f1fea0a489dc\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.709829 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs" (OuterVolumeSpecName: "kube-api-access-9nwjs") pod 
"945685c3-965d-45a3-b1dc-f1fea0a489dc" (UID: "945685c3-965d-45a3-b1dc-f1fea0a489dc"). InnerVolumeSpecName "kube-api-access-9nwjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.711504 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "945685c3-965d-45a3-b1dc-f1fea0a489dc" (UID: "945685c3-965d-45a3-b1dc-f1fea0a489dc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.774373 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data" (OuterVolumeSpecName: "config-data") pod "945685c3-965d-45a3-b1dc-f1fea0a489dc" (UID: "945685c3-965d-45a3-b1dc-f1fea0a489dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.799994 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.800026 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nwjs\" (UniqueName: \"kubernetes.io/projected/945685c3-965d-45a3-b1dc-f1fea0a489dc-kube-api-access-9nwjs\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.800039 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.870074 4718 generic.go:334] "Generic (PLEG): container finished" podID="945685c3-965d-45a3-b1dc-f1fea0a489dc" containerID="8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa" exitCode=137 Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.870165 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.870155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" event={"ID":"945685c3-965d-45a3-b1dc-f1fea0a489dc","Type":"ContainerDied","Data":"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.870515 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-567d9cb85d-xx4vp" event={"ID":"945685c3-965d-45a3-b1dc-f1fea0a489dc","Type":"ContainerDied","Data":"dc268aa35dcdf03db3a201ee9894f3171f7018f7af726e248b7e2d6e709e79db"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.870539 4718 scope.go:117] "RemoveContainer" containerID="8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.876753 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "945685c3-965d-45a3-b1dc-f1fea0a489dc" (UID: "945685c3-965d-45a3-b1dc-f1fea0a489dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.876797 4718 generic.go:334] "Generic (PLEG): container finished" podID="0afc8879-60d8-4d63-9784-e408e7e46ec8" containerID="b469575e590a2fac46afe16fca28bf9940d84efdd7d4cfccd87a63d63835a4c6" exitCode=137 Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.876856 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dc8889c5c-4lxb7" event={"ID":"0afc8879-60d8-4d63-9784-e408e7e46ec8","Type":"ContainerDied","Data":"b469575e590a2fac46afe16fca28bf9940d84efdd7d4cfccd87a63d63835a4c6"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.891967 4718 generic.go:334] "Generic (PLEG): container finished" podID="08217c1d-bb06-4978-962e-541df7337fcf" containerID="23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143" exitCode=143 Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.892062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerDied","Data":"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.903299 4718 generic.go:334] "Generic (PLEG): container finished" podID="d762625c-0c19-4652-a92f-a59698a086e3" containerID="25aa140d1c481c4e322e2dcec5a4909a3499d36780ef2a745e4cd3d276cbcca5" exitCode=0 Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.904569 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerDied","Data":"25aa140d1c481c4e322e2dcec5a4909a3499d36780ef2a745e4cd3d276cbcca5"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.904956 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945685c3-965d-45a3-b1dc-f1fea0a489dc-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.913847 4718 scope.go:117] "RemoveContainer" containerID="8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa" Jan 23 16:40:07 crc kubenswrapper[4718]: E0123 16:40:06.914828 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa\": container with ID starting with 8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa not found: ID does not exist" containerID="8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.914877 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa"} err="failed to get container status \"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa\": rpc error: code = NotFound desc = could not find container \"8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa\": container with ID starting with 8cb071e23622c4626e30cc61cd43e27536a80813c4a9a9353baaadd6ca87e5fa not found: ID does not exist" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:06.935878 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.009542 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.009725 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.084127 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.093496 4718 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.095604 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.186414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-lm8pg"] Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.212192 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z822r\" (UniqueName: \"kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r\") pod \"0afc8879-60d8-4d63-9784-e408e7e46ec8\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.212248 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.212292 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.212378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.212799 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213241 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom\") pod \"0afc8879-60d8-4d63-9784-e408e7e46ec8\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213405 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data\") pod \"0afc8879-60d8-4d63-9784-e408e7e46ec8\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213557 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") pod 
\"0afc8879-60d8-4d63-9784-e408e7e46ec8\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213705 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.213772 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd\") pod \"d762625c-0c19-4652-a92f-a59698a086e3\" (UID: \"d762625c-0c19-4652-a92f-a59698a086e3\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.214455 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.214970 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.230032 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0afc8879-60d8-4d63-9784-e408e7e46ec8" (UID: "0afc8879-60d8-4d63-9784-e408e7e46ec8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.230222 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r" (OuterVolumeSpecName: "kube-api-access-z822r") pod "0afc8879-60d8-4d63-9784-e408e7e46ec8" (UID: "0afc8879-60d8-4d63-9784-e408e7e46ec8"). InnerVolumeSpecName "kube-api-access-z822r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.230322 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts" (OuterVolumeSpecName: "scripts") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.243607 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn" (OuterVolumeSpecName: "kube-api-access-m8vjn") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "kube-api-access-m8vjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.315742 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0afc8879-60d8-4d63-9784-e408e7e46ec8" (UID: "0afc8879-60d8-4d63-9784-e408e7e46ec8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.315968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") pod \"0afc8879-60d8-4d63-9784-e408e7e46ec8\" (UID: \"0afc8879-60d8-4d63-9784-e408e7e46ec8\") " Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317060 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d762625c-0c19-4652-a92f-a59698a086e3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317100 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z822r\" (UniqueName: \"kubernetes.io/projected/0afc8879-60d8-4d63-9784-e408e7e46ec8-kube-api-access-z822r\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317110 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317119 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/d762625c-0c19-4652-a92f-a59698a086e3-kube-api-access-m8vjn\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317127 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: W0123 16:40:07.317220 4718 empty_dir.go:500] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/0afc8879-60d8-4d63-9784-e408e7e46ec8/volumes/kubernetes.io~secret/combined-ca-bundle Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.317234 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0afc8879-60d8-4d63-9784-e408e7e46ec8" (UID: "0afc8879-60d8-4d63-9784-e408e7e46ec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.344491 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.344533 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.356482 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.363794 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.363841 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.367763 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data" (OuterVolumeSpecName: "config-data") pod "0afc8879-60d8-4d63-9784-e408e7e46ec8" (UID: "0afc8879-60d8-4d63-9784-e408e7e46ec8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.419800 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.419853 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc8879-60d8-4d63-9784-e408e7e46ec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.419866 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.421695 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.431981 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.440729 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"] Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.475009 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-567d9cb85d-xx4vp"] Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.484757 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data" (OuterVolumeSpecName: "config-data") pod "d762625c-0c19-4652-a92f-a59698a086e3" (UID: "d762625c-0c19-4652-a92f-a59698a086e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.521374 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"] Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.522374 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.522401 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d762625c-0c19-4652-a92f-a59698a086e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.921987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lm8pg" event={"ID":"41125ff2-1f34-4be1-a9f1-97c9a8987dba","Type":"ContainerStarted","Data":"5c6cf58a6bcc4727c1466a1749d443da61ff05bd27fde8a17c470af5d757cc7f"} Jan 23 
16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.926865 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lm8pg" event={"ID":"41125ff2-1f34-4be1-a9f1-97c9a8987dba","Type":"ContainerStarted","Data":"d6de4d4bdb73b72f178b08055d19afcf17c7a6e47815c003089931e72ad680f4"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.960110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d762625c-0c19-4652-a92f-a59698a086e3","Type":"ContainerDied","Data":"d1358c6ffb4a16c64c7b149cd62fc37a659470d2f560c9271357fbe205994f5f"} Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.960220 4718 scope.go:117] "RemoveContainer" containerID="f33196fd1ba87185c80339a0783cb42522dce7e7f418ffe1fe6b8016718881b5" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.960405 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:07 crc kubenswrapper[4718]: I0123 16:40:07.967784 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-lm8pg" podStartSLOduration=2.9677397660000002 podStartE2EDuration="2.967739766s" podCreationTimestamp="2026-01-23 16:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:07.937347345 +0000 UTC m=+1409.084589346" watchObservedRunningTime="2026-01-23 16:40:07.967739766 +0000 UTC m=+1409.114981757" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.009528 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dc8889c5c-4lxb7" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.010428 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="dnsmasq-dns" containerID="cri-o://7454ef626a552ccffcd797cb590e1657c5d4d57d46c48759899a4922da7ab185" gracePeriod=10 Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.010876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dc8889c5c-4lxb7" event={"ID":"0afc8879-60d8-4d63-9784-e408e7e46ec8","Type":"ContainerDied","Data":"f9b0f7a917bdbcdf8b4b3a8419b082faeb7500ad76d7582caf3e28dd3744e5a1"} Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.132557 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-03bd-account-create-update-9h26j"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.398955 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.432293 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.432680 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.544352 4718 scope.go:117] "RemoveContainer" containerID="27d728899aca49c541bd6e8e2114f2abd12fc1ca65f4e63fe799ea6015b07cbb" Jan 23 16:40:08 crc 
kubenswrapper[4718]: I0123 16:40:08.615558 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.629051 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.634245 4718 scope.go:117] "RemoveContainer" containerID="25aa140d1c481c4e322e2dcec5a4909a3499d36780ef2a745e4cd3d276cbcca5" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.641405 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.641981 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945685c3-965d-45a3-b1dc-f1fea0a489dc" containerName="heat-cfnapi" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642000 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="945685c3-965d-45a3-b1dc-f1fea0a489dc" containerName="heat-cfnapi" Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.642020 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="sg-core" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642027 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="sg-core" Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.642038 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afc8879-60d8-4d63-9784-e408e7e46ec8" containerName="heat-api" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642045 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afc8879-60d8-4d63-9784-e408e7e46ec8" containerName="heat-api" Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.642062 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="proxy-httpd" Jan 23 16:40:08 crc kubenswrapper[4718]: 
I0123 16:40:08.642068 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="proxy-httpd" Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.642096 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-central-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642104 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-central-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: E0123 16:40:08.642115 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-notification-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642121 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-notification-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642354 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="945685c3-965d-45a3-b1dc-f1fea0a489dc" containerName="heat-cfnapi" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642382 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afc8879-60d8-4d63-9784-e408e7e46ec8" containerName="heat-api" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642399 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="ceilometer-notification-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642412 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="proxy-httpd" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642422 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d762625c-0c19-4652-a92f-a59698a086e3" 
containerName="ceilometer-central-agent" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.642430 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d762625c-0c19-4652-a92f-a59698a086e3" containerName="sg-core" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.646526 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.654721 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.655153 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.655430 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.666727 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7dc8889c5c-4lxb7"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.683965 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.707595 4718 scope.go:117] "RemoveContainer" containerID="c532819e7be6234cf987aa349f7a64368fa6221ad0848139458560c3350dd1ea" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.754087 4718 scope.go:117] "RemoveContainer" containerID="b469575e590a2fac46afe16fca28bf9940d84efdd7d4cfccd87a63d63835a4c6" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.760900 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.760992 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ds8\" (UniqueName: \"kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.761040 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.761095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.761125 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.761142 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.761201 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.863505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.863995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864072 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864324 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 
16:40:08.864676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864723 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864831 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ds8\" (UniqueName: \"kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.864948 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.871655 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.886767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts\") pod \"ceilometer-0\" (UID: 
\"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.886803 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.891378 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:08 crc kubenswrapper[4718]: I0123 16:40:08.906111 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ds8\" (UniqueName: \"kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8\") pod \"ceilometer-0\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " pod="openstack/ceilometer-0" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.000486 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.057339 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerID="7454ef626a552ccffcd797cb590e1657c5d4d57d46c48759899a4922da7ab185" exitCode=0 Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.057418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" event={"ID":"3ae9ff45-f144-444f-b736-40cc69a7bda0","Type":"ContainerDied","Data":"7454ef626a552ccffcd797cb590e1657c5d4d57d46c48759899a4922da7ab185"} Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.085045 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-03bd-account-create-update-9h26j" event={"ID":"e6465851-021f-4d4b-8f64-aec0b8be2cee","Type":"ContainerStarted","Data":"a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4"} Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.085117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-03bd-account-create-update-9h26j" event={"ID":"e6465851-021f-4d4b-8f64-aec0b8be2cee","Type":"ContainerStarted","Data":"be2e45a9d67bdb9f72c4a21ee01fcb345028f144d76a20a538ae953462ed7338"} Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.099918 4718 generic.go:334] "Generic (PLEG): container finished" podID="41125ff2-1f34-4be1-a9f1-97c9a8987dba" containerID="5c6cf58a6bcc4727c1466a1749d443da61ff05bd27fde8a17c470af5d757cc7f" exitCode=0 Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.100036 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lm8pg" event={"ID":"41125ff2-1f34-4be1-a9f1-97c9a8987dba","Type":"ContainerDied","Data":"5c6cf58a6bcc4727c1466a1749d443da61ff05bd27fde8a17c470af5d757cc7f"} Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.119214 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-03bd-account-create-update-9h26j" podStartSLOduration=4.119193545 podStartE2EDuration="4.119193545s" podCreationTimestamp="2026-01-23 16:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:09.102785633 +0000 UTC m=+1410.250027654" watchObservedRunningTime="2026-01-23 16:40:09.119193545 +0000 UTC m=+1410.266435536" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.277621 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afc8879-60d8-4d63-9784-e408e7e46ec8" path="/var/lib/kubelet/pods/0afc8879-60d8-4d63-9784-e408e7e46ec8/volumes" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.278689 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945685c3-965d-45a3-b1dc-f1fea0a489dc" path="/var/lib/kubelet/pods/945685c3-965d-45a3-b1dc-f1fea0a489dc/volumes" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.279720 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d762625c-0c19-4652-a92f-a59698a086e3" path="/var/lib/kubelet/pods/d762625c-0c19-4652-a92f-a59698a086e3/volumes" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.330825 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419210 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419643 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj5t4\" (UniqueName: \"kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419675 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419779 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.419917 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config\") pod \"3ae9ff45-f144-444f-b736-40cc69a7bda0\" (UID: \"3ae9ff45-f144-444f-b736-40cc69a7bda0\") " Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.429260 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4" (OuterVolumeSpecName: "kube-api-access-bj5t4") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "kube-api-access-bj5t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.524750 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj5t4\" (UniqueName: \"kubernetes.io/projected/3ae9ff45-f144-444f-b736-40cc69a7bda0-kube-api-access-bj5t4\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.546174 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.558308 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: E0123 16:40:09.567586 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6465851_021f_4d4b_8f64_aec0b8be2cee.slice/crio-conmon-a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6465851_021f_4d4b_8f64_aec0b8be2cee.slice/crio-a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.612326 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.616462 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.623999 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config" (OuterVolumeSpecName: "config") pod "3ae9ff45-f144-444f-b736-40cc69a7bda0" (UID: "3ae9ff45-f144-444f-b736-40cc69a7bda0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.627603 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.627657 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.627671 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.627679 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.627688 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae9ff45-f144-444f-b736-40cc69a7bda0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:09 crc kubenswrapper[4718]: I0123 16:40:09.636729 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.119639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerStarted","Data":"82411c3a7a39b0d26d2cce33c444dbcac0e38dcca6fb44fe943457e3cbf5677c"} Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.122177 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" 
event={"ID":"3ae9ff45-f144-444f-b736-40cc69a7bda0","Type":"ContainerDied","Data":"dd2fe203780c4c4ce07519172902e25642ce9b170bb130bb8a5c30f0a6889cc4"} Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.122222 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-7tbwv" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.122268 4718 scope.go:117] "RemoveContainer" containerID="7454ef626a552ccffcd797cb590e1657c5d4d57d46c48759899a4922da7ab185" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.129925 4718 generic.go:334] "Generic (PLEG): container finished" podID="e6465851-021f-4d4b-8f64-aec0b8be2cee" containerID="a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4" exitCode=0 Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.130099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-03bd-account-create-update-9h26j" event={"ID":"e6465851-021f-4d4b-8f64-aec0b8be2cee","Type":"ContainerDied","Data":"a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4"} Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.149336 4718 scope.go:117] "RemoveContainer" containerID="aa4c11d74afd8ed9452f8e5ce6e954db7506ce78803bb3f8b30827ce21f8d972" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.476099 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"] Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.498538 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-7tbwv"] Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.567672 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.659240 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts\") pod \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.659425 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvtbv\" (UniqueName: \"kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv\") pod \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\" (UID: \"41125ff2-1f34-4be1-a9f1-97c9a8987dba\") " Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.660479 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41125ff2-1f34-4be1-a9f1-97c9a8987dba" (UID: "41125ff2-1f34-4be1-a9f1-97c9a8987dba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.669935 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv" (OuterVolumeSpecName: "kube-api-access-wvtbv") pod "41125ff2-1f34-4be1-a9f1-97c9a8987dba" (UID: "41125ff2-1f34-4be1-a9f1-97c9a8987dba"). InnerVolumeSpecName "kube-api-access-wvtbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.762395 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvtbv\" (UniqueName: \"kubernetes.io/projected/41125ff2-1f34-4be1-a9f1-97c9a8987dba-kube-api-access-wvtbv\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:10 crc kubenswrapper[4718]: I0123 16:40:10.762439 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41125ff2-1f34-4be1-a9f1-97c9a8987dba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.165994 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-lm8pg" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.168737 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" path="/var/lib/kubelet/pods/3ae9ff45-f144-444f-b736-40cc69a7bda0/volumes" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.169621 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-lm8pg" event={"ID":"41125ff2-1f34-4be1-a9f1-97c9a8987dba","Type":"ContainerDied","Data":"d6de4d4bdb73b72f178b08055d19afcf17c7a6e47815c003089931e72ad680f4"} Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.169665 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6de4d4bdb73b72f178b08055d19afcf17c7a6e47815c003089931e72ad680f4" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.173133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerStarted","Data":"96b8976b17351b6365b052e4ebce443ea82d252e3c1b405623008b60a79f194a"} Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.175938 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="f8db3c02-bc86-4e35-8cce-3fba179bfe88" containerID="5fe5268e1fc895fd279ed08c0f2fad6d958c47574bda524ef22f26b709867bae" exitCode=0 Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.176105 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c7phc" event={"ID":"f8db3c02-bc86-4e35-8cce-3fba179bfe88","Type":"ContainerDied","Data":"5fe5268e1fc895fd279ed08c0f2fad6d958c47574bda524ef22f26b709867bae"} Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.630824 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.691156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts\") pod \"e6465851-021f-4d4b-8f64-aec0b8be2cee\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.691322 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6zkt\" (UniqueName: \"kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt\") pod \"e6465851-021f-4d4b-8f64-aec0b8be2cee\" (UID: \"e6465851-021f-4d4b-8f64-aec0b8be2cee\") " Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.693223 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6465851-021f-4d4b-8f64-aec0b8be2cee" (UID: "e6465851-021f-4d4b-8f64-aec0b8be2cee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.698619 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt" (OuterVolumeSpecName: "kube-api-access-v6zkt") pod "e6465851-021f-4d4b-8f64-aec0b8be2cee" (UID: "e6465851-021f-4d4b-8f64-aec0b8be2cee"). InnerVolumeSpecName "kube-api-access-v6zkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.795006 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6465851-021f-4d4b-8f64-aec0b8be2cee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:11 crc kubenswrapper[4718]: I0123 16:40:11.795043 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6zkt\" (UniqueName: \"kubernetes.io/projected/e6465851-021f-4d4b-8f64-aec0b8be2cee-kube-api-access-v6zkt\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.191279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerStarted","Data":"6eafe82862407c38d547ea9383f9f04d0e62f1e3edcedd5d3a13a24d1fc59e08"} Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.197328 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-03bd-account-create-update-9h26j" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.201522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-03bd-account-create-update-9h26j" event={"ID":"e6465851-021f-4d4b-8f64-aec0b8be2cee","Type":"ContainerDied","Data":"be2e45a9d67bdb9f72c4a21ee01fcb345028f144d76a20a538ae953462ed7338"} Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.201574 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2e45a9d67bdb9f72c4a21ee01fcb345028f144d76a20a538ae953462ed7338" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.628021 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.720902 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts\") pod \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.721095 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data\") pod \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.721315 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gkr\" (UniqueName: \"kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr\") pod \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.721409 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle\") pod \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\" (UID: \"f8db3c02-bc86-4e35-8cce-3fba179bfe88\") " Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.732930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts" (OuterVolumeSpecName: "scripts") pod "f8db3c02-bc86-4e35-8cce-3fba179bfe88" (UID: "f8db3c02-bc86-4e35-8cce-3fba179bfe88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.733180 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr" (OuterVolumeSpecName: "kube-api-access-f7gkr") pod "f8db3c02-bc86-4e35-8cce-3fba179bfe88" (UID: "f8db3c02-bc86-4e35-8cce-3fba179bfe88"). InnerVolumeSpecName "kube-api-access-f7gkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.766276 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data" (OuterVolumeSpecName: "config-data") pod "f8db3c02-bc86-4e35-8cce-3fba179bfe88" (UID: "f8db3c02-bc86-4e35-8cce-3fba179bfe88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.774579 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8db3c02-bc86-4e35-8cce-3fba179bfe88" (UID: "f8db3c02-bc86-4e35-8cce-3fba179bfe88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.826510 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.826577 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.826589 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gkr\" (UniqueName: \"kubernetes.io/projected/f8db3c02-bc86-4e35-8cce-3fba179bfe88-kube-api-access-f7gkr\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:12 crc kubenswrapper[4718]: I0123 16:40:12.826601 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db3c02-bc86-4e35-8cce-3fba179bfe88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.232422 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerStarted","Data":"6f338651e8bce279911d757fc8ae82dc0ad356eb883160167234e4360a0de85c"} Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.236096 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c7phc" event={"ID":"f8db3c02-bc86-4e35-8cce-3fba179bfe88","Type":"ContainerDied","Data":"9dfb003ad524cadd48ae0974451a75ff74738742760dcd356e1d33370a263b0d"} Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.236212 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfb003ad524cadd48ae0974451a75ff74738742760dcd356e1d33370a263b0d" Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 
16:40:13.236313 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c7phc" Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.383156 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.383472 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-api" containerID="cri-o://8233f3106f9961c829e4a42d9ad2ed32959a4d702b0fc464b3293797c22ba5a8" gracePeriod=30 Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.383693 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-log" containerID="cri-o://721e7c3896883ba1f671c64b7ca0e619f95742a3f7b640494a72c9bfcd355630" gracePeriod=30 Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.420277 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:13 crc kubenswrapper[4718]: I0123 16:40:13.420484 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7196bcab-6196-4625-b162-3ed1c69bc1ad" containerName="nova-scheduler-scheduler" containerID="cri-o://0906ce8ddda5c6f166243749547ffa84bf138f5ecca6ada5a791e1b5b1ad7ec0" gracePeriod=30 Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.253545 4718 generic.go:334] "Generic (PLEG): container finished" podID="2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" containerID="515c04a29fef45d29dd5ebf358fb085926754f858e699d3ad6c424457ab53fc6" exitCode=0 Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.253611 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98zcf" 
event={"ID":"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf","Type":"ContainerDied","Data":"515c04a29fef45d29dd5ebf358fb085926754f858e699d3ad6c424457ab53fc6"} Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.257806 4718 generic.go:334] "Generic (PLEG): container finished" podID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerID="721e7c3896883ba1f671c64b7ca0e619f95742a3f7b640494a72c9bfcd355630" exitCode=143 Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.257866 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerDied","Data":"721e7c3896883ba1f671c64b7ca0e619f95742a3f7b640494a72c9bfcd355630"} Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.261676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerStarted","Data":"91d513ce2d4d273631e7e8caf5e80bf7f24b0c9738b464af5a7eb92d63511a54"} Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.261990 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:40:14 crc kubenswrapper[4718]: I0123 16:40:14.313207 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112620803 podStartE2EDuration="6.313185536s" podCreationTimestamp="2026-01-23 16:40:08 +0000 UTC" firstStartedPulling="2026-01-23 16:40:09.629148647 +0000 UTC m=+1410.776390638" lastFinishedPulling="2026-01-23 16:40:13.82971339 +0000 UTC m=+1414.976955371" observedRunningTime="2026-01-23 16:40:14.304602393 +0000 UTC m=+1415.451844384" watchObservedRunningTime="2026-01-23 16:40:14.313185536 +0000 UTC m=+1415.460427527" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.278481 4718 generic.go:334] "Generic (PLEG): container finished" podID="7196bcab-6196-4625-b162-3ed1c69bc1ad" 
containerID="0906ce8ddda5c6f166243749547ffa84bf138f5ecca6ada5a791e1b5b1ad7ec0" exitCode=0 Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.278524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7196bcab-6196-4625-b162-3ed1c69bc1ad","Type":"ContainerDied","Data":"0906ce8ddda5c6f166243749547ffa84bf138f5ecca6ada5a791e1b5b1ad7ec0"} Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.451861 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.495818 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle\") pod \"7196bcab-6196-4625-b162-3ed1c69bc1ad\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.496012 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data\") pod \"7196bcab-6196-4625-b162-3ed1c69bc1ad\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.496065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kpr\" (UniqueName: \"kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr\") pod \"7196bcab-6196-4625-b162-3ed1c69bc1ad\" (UID: \"7196bcab-6196-4625-b162-3ed1c69bc1ad\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.526510 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr" (OuterVolumeSpecName: "kube-api-access-d5kpr") pod "7196bcab-6196-4625-b162-3ed1c69bc1ad" (UID: 
"7196bcab-6196-4625-b162-3ed1c69bc1ad"). InnerVolumeSpecName "kube-api-access-d5kpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.540932 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data" (OuterVolumeSpecName: "config-data") pod "7196bcab-6196-4625-b162-3ed1c69bc1ad" (UID: "7196bcab-6196-4625-b162-3ed1c69bc1ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.556547 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7196bcab-6196-4625-b162-3ed1c69bc1ad" (UID: "7196bcab-6196-4625-b162-3ed1c69bc1ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.600344 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.600405 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7196bcab-6196-4625-b162-3ed1c69bc1ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.600417 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kpr\" (UniqueName: \"kubernetes.io/projected/7196bcab-6196-4625-b162-3ed1c69bc1ad-kube-api-access-d5kpr\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.641488 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.704203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle\") pod \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.704362 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data\") pod \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.704668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts\") pod \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.704778 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7749\" (UniqueName: \"kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749\") pod \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\" (UID: \"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf\") " Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.709417 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749" (OuterVolumeSpecName: "kube-api-access-g7749") pod "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" (UID: "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf"). InnerVolumeSpecName "kube-api-access-g7749". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.716971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts" (OuterVolumeSpecName: "scripts") pod "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" (UID: "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.739486 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data" (OuterVolumeSpecName: "config-data") pod "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" (UID: "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.750234 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" (UID: "2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.808858 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.808902 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.808914 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:15 crc kubenswrapper[4718]: I0123 16:40:15.808924 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7749\" (UniqueName: \"kubernetes.io/projected/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf-kube-api-access-g7749\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.130380 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-r8tbd"] Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131199 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41125ff2-1f34-4be1-a9f1-97c9a8987dba" containerName="mariadb-database-create" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131227 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="41125ff2-1f34-4be1-a9f1-97c9a8987dba" containerName="mariadb-database-create" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131242 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7196bcab-6196-4625-b162-3ed1c69bc1ad" containerName="nova-scheduler-scheduler" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131253 4718 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7196bcab-6196-4625-b162-3ed1c69bc1ad" containerName="nova-scheduler-scheduler" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131277 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" containerName="nova-cell1-conductor-db-sync" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131285 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" containerName="nova-cell1-conductor-db-sync" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131301 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8db3c02-bc86-4e35-8cce-3fba179bfe88" containerName="nova-manage" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131307 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db3c02-bc86-4e35-8cce-3fba179bfe88" containerName="nova-manage" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131354 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6465851-021f-4d4b-8f64-aec0b8be2cee" containerName="mariadb-account-create-update" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131360 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6465851-021f-4d4b-8f64-aec0b8be2cee" containerName="mariadb-account-create-update" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131370 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="dnsmasq-dns" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131378 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="dnsmasq-dns" Jan 23 16:40:16 crc kubenswrapper[4718]: E0123 16:40:16.131409 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="init" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131416 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="init" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131667 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6465851-021f-4d4b-8f64-aec0b8be2cee" containerName="mariadb-account-create-update" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131694 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7196bcab-6196-4625-b162-3ed1c69bc1ad" containerName="nova-scheduler-scheduler" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131710 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="41125ff2-1f34-4be1-a9f1-97c9a8987dba" containerName="mariadb-database-create" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131724 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8db3c02-bc86-4e35-8cce-3fba179bfe88" containerName="nova-manage" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131737 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae9ff45-f144-444f-b736-40cc69a7bda0" containerName="dnsmasq-dns" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.131749 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" containerName="nova-cell1-conductor-db-sync" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.132975 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.135716 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tjnkn" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.135923 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.136069 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.136244 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.176399 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-r8tbd"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.217761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.218120 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvxp\" (UniqueName: \"kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.218288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data\") pod \"aodh-db-sync-r8tbd\" (UID: 
\"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.218488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.291832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98zcf" event={"ID":"2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf","Type":"ContainerDied","Data":"c6568ad640ed92599e80c7449738d03536478de72663cbe91c18be66aedb707b"} Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.291876 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6568ad640ed92599e80c7449738d03536478de72663cbe91c18be66aedb707b" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.291937 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98zcf" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.295227 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7196bcab-6196-4625-b162-3ed1c69bc1ad","Type":"ContainerDied","Data":"ae0ae3c19ed5e3ffbeb691197b02eda7f9c37e34b4270153ebc85686a92eeb50"} Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.295300 4718 scope.go:117] "RemoveContainer" containerID="0906ce8ddda5c6f166243749547ffa84bf138f5ecca6ada5a791e1b5b1ad7ec0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.295461 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.322007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvxp\" (UniqueName: \"kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.322260 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.322655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.322826 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.331310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.348345 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.351904 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.353160 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvxp\" (UniqueName: \"kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp\") pod \"aodh-db-sync-r8tbd\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.373069 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.392920 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.424305 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.427993 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.429473 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.441202 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.442917 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.449159 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.458389 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.465558 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.473486 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.529036 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbb7x\" (UniqueName: \"kubernetes.io/projected/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-kube-api-access-gbb7x\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.529217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.530774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.530831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg6n\" (UniqueName: \"kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.530965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.531197 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbb7x\" (UniqueName: \"kubernetes.io/projected/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-kube-api-access-gbb7x\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633839 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htg6n\" (UniqueName: \"kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n\") pod 
\"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633870 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.633936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.643210 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.643710 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.655164 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbb7x\" (UniqueName: \"kubernetes.io/projected/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-kube-api-access-gbb7x\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc 
kubenswrapper[4718]: I0123 16:40:16.655215 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg6n\" (UniqueName: \"kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.659264 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7057072a-eda4-442e-9cb6-b9c2dbaebe3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7057072a-eda4-442e-9cb6-b9c2dbaebe3d\") " pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.659843 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data\") pod \"nova-scheduler-0\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " pod="openstack/nova-scheduler-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.787562 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:16 crc kubenswrapper[4718]: I0123 16:40:16.800579 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.101406 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-r8tbd"] Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.169188 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7196bcab-6196-4625-b162-3ed1c69bc1ad" path="/var/lib/kubelet/pods/7196bcab-6196-4625-b162-3ed1c69bc1ad/volumes" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.317652 4718 generic.go:334] "Generic (PLEG): container finished" podID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerID="8233f3106f9961c829e4a42d9ad2ed32959a4d702b0fc464b3293797c22ba5a8" exitCode=0 Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.317683 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerDied","Data":"8233f3106f9961c829e4a42d9ad2ed32959a4d702b0fc464b3293797c22ba5a8"} Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.317730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8","Type":"ContainerDied","Data":"3d19bd54be9db420397abeba057a20c1820a72acc3d013d0495ae717c55a1278"} Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.317743 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d19bd54be9db420397abeba057a20c1820a72acc3d013d0495ae717c55a1278" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.322116 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-r8tbd" event={"ID":"e13056ae-8539-4c9e-bb89-c62a84bd3446","Type":"ContainerStarted","Data":"630a8c22f419c52a391600c49ed5e1b99909bc291b1764453ec6ac118132e3b0"} Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.335369 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.454353 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:17 crc kubenswrapper[4718]: W0123 16:40:17.458867 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7057072a_eda4_442e_9cb6_b9c2dbaebe3d.slice/crio-a0b27fc37d20f4599049c566319c644bc50e57e2339edd3e2b8f8bfa221417c4 WatchSource:0}: Error finding container a0b27fc37d20f4599049c566319c644bc50e57e2339edd3e2b8f8bfa221417c4: Status 404 returned error can't find the container with id a0b27fc37d20f4599049c566319c644bc50e57e2339edd3e2b8f8bfa221417c4 Jan 23 16:40:17 crc kubenswrapper[4718]: W0123 16:40:17.459427 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-2dbba3883e7b0fb9d6e31e6fe97105f571a91e2729ccea20bf0efedb488f0d34 WatchSource:0}: Error finding container 2dbba3883e7b0fb9d6e31e6fe97105f571a91e2729ccea20bf0efedb488f0d34: Status 404 returned error can't find the container with id 2dbba3883e7b0fb9d6e31e6fe97105f571a91e2729ccea20bf0efedb488f0d34 Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.468858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data\") pod \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.468898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle\") pod \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " Jan 23 
16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.468943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs\") pod \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.469094 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcgf\" (UniqueName: \"kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf\") pod \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\" (UID: \"1332d6a7-7bf4-4539-b1d2-e068d3cf51f8\") " Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.469886 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.473765 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs" (OuterVolumeSpecName: "logs") pod "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" (UID: "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.474146 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf" (OuterVolumeSpecName: "kube-api-access-rwcgf") pod "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" (UID: "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8"). InnerVolumeSpecName "kube-api-access-rwcgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.526776 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" (UID: "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.529890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data" (OuterVolumeSpecName: "config-data") pod "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" (UID: "1332d6a7-7bf4-4539-b1d2-e068d3cf51f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.572333 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.572369 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.572380 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:17 crc kubenswrapper[4718]: I0123 16:40:17.572389 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcgf\" (UniqueName: \"kubernetes.io/projected/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8-kube-api-access-rwcgf\") on node \"crc\" DevicePath \"\"" Jan 23 
16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.342560 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ad97822-bebe-4e44-97c5-92732ed20095","Type":"ContainerStarted","Data":"c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14"} Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.343031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ad97822-bebe-4e44-97c5-92732ed20095","Type":"ContainerStarted","Data":"2dbba3883e7b0fb9d6e31e6fe97105f571a91e2729ccea20bf0efedb488f0d34"} Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.345042 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.353222 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7057072a-eda4-442e-9cb6-b9c2dbaebe3d","Type":"ContainerStarted","Data":"569e1b94a55df50f5d10dae3a143d20984a5df13961cc5744e35fb1855bf9dd9"} Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.353285 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7057072a-eda4-442e-9cb6-b9c2dbaebe3d","Type":"ContainerStarted","Data":"a0b27fc37d20f4599049c566319c644bc50e57e2339edd3e2b8f8bfa221417c4"} Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.353453 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.382423 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.382404011 podStartE2EDuration="2.382404011s" podCreationTimestamp="2026-01-23 16:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
16:40:18.375092613 +0000 UTC m=+1419.522334604" watchObservedRunningTime="2026-01-23 16:40:18.382404011 +0000 UTC m=+1419.529646002" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.420401 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.451259 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.451230261 podStartE2EDuration="2.451230261s" podCreationTimestamp="2026-01-23 16:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:18.410565692 +0000 UTC m=+1419.557807673" watchObservedRunningTime="2026-01-23 16:40:18.451230261 +0000 UTC m=+1419.598472252" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.452295 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.493580 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:18 crc kubenswrapper[4718]: E0123 16:40:18.494528 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-log" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.494549 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-log" Jan 23 16:40:18 crc kubenswrapper[4718]: E0123 16:40:18.494714 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-api" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.494728 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-api" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.494952 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-log" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.494986 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" containerName="nova-api-api" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.496467 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.498890 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.509836 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.599475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prt9j\" (UniqueName: \"kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.599599 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.599756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.600123 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.702186 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.702279 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prt9j\" (UniqueName: \"kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.702348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.702420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.702612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " 
pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.724759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.725047 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.728747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prt9j\" (UniqueName: \"kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j\") pod \"nova-api-0\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") " pod="openstack/nova-api-0" Jan 23 16:40:18 crc kubenswrapper[4718]: I0123 16:40:18.821655 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:40:19 crc kubenswrapper[4718]: I0123 16:40:19.180323 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1332d6a7-7bf4-4539-b1d2-e068d3cf51f8" path="/var/lib/kubelet/pods/1332d6a7-7bf4-4539-b1d2-e068d3cf51f8/volumes" Jan 23 16:40:19 crc kubenswrapper[4718]: I0123 16:40:19.376160 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:20 crc kubenswrapper[4718]: I0123 16:40:20.390286 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerStarted","Data":"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"} Jan 23 16:40:20 crc kubenswrapper[4718]: I0123 16:40:20.391161 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerStarted","Data":"401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031"} Jan 23 16:40:21 crc kubenswrapper[4718]: I0123 16:40:21.801281 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 16:40:23 crc kubenswrapper[4718]: I0123 16:40:23.451385 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-r8tbd" event={"ID":"e13056ae-8539-4c9e-bb89-c62a84bd3446","Type":"ContainerStarted","Data":"df4dd1404c98a52285568edef5cb0e077be4e1fba2f634a9bc8eb77cfd322e6a"} Jan 23 16:40:23 crc kubenswrapper[4718]: I0123 16:40:23.455474 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerStarted","Data":"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"} Jan 23 16:40:23 crc kubenswrapper[4718]: I0123 16:40:23.495425 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-r8tbd" 
podStartSLOduration=2.148773836 podStartE2EDuration="7.495390145s" podCreationTimestamp="2026-01-23 16:40:16 +0000 UTC" firstStartedPulling="2026-01-23 16:40:17.114410561 +0000 UTC m=+1418.261652552" lastFinishedPulling="2026-01-23 16:40:22.46102687 +0000 UTC m=+1423.608268861" observedRunningTime="2026-01-23 16:40:23.476377361 +0000 UTC m=+1424.623619362" watchObservedRunningTime="2026-01-23 16:40:23.495390145 +0000 UTC m=+1424.642632136" Jan 23 16:40:23 crc kubenswrapper[4718]: I0123 16:40:23.510266 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.510240887 podStartE2EDuration="5.510240887s" podCreationTimestamp="2026-01-23 16:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:23.503038012 +0000 UTC m=+1424.650280003" watchObservedRunningTime="2026-01-23 16:40:23.510240887 +0000 UTC m=+1424.657482878" Jan 23 16:40:25 crc kubenswrapper[4718]: I0123 16:40:25.492554 4718 generic.go:334] "Generic (PLEG): container finished" podID="e13056ae-8539-4c9e-bb89-c62a84bd3446" containerID="df4dd1404c98a52285568edef5cb0e077be4e1fba2f634a9bc8eb77cfd322e6a" exitCode=0 Jan 23 16:40:25 crc kubenswrapper[4718]: I0123 16:40:25.492702 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-r8tbd" event={"ID":"e13056ae-8539-4c9e-bb89-c62a84bd3446","Type":"ContainerDied","Data":"df4dd1404c98a52285568edef5cb0e077be4e1fba2f634a9bc8eb77cfd322e6a"} Jan 23 16:40:26 crc kubenswrapper[4718]: I0123 16:40:26.807808 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 16:40:26 crc kubenswrapper[4718]: I0123 16:40:26.837329 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 16:40:26 crc kubenswrapper[4718]: I0123 16:40:26.844646 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.035152 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.157288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle\") pod \"e13056ae-8539-4c9e-bb89-c62a84bd3446\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.157334 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data\") pod \"e13056ae-8539-4c9e-bb89-c62a84bd3446\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.157416 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bvxp\" (UniqueName: \"kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp\") pod \"e13056ae-8539-4c9e-bb89-c62a84bd3446\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.157503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts\") pod \"e13056ae-8539-4c9e-bb89-c62a84bd3446\" (UID: \"e13056ae-8539-4c9e-bb89-c62a84bd3446\") " Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.164329 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp" (OuterVolumeSpecName: "kube-api-access-9bvxp") pod "e13056ae-8539-4c9e-bb89-c62a84bd3446" (UID: 
"e13056ae-8539-4c9e-bb89-c62a84bd3446"). InnerVolumeSpecName "kube-api-access-9bvxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.165213 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts" (OuterVolumeSpecName: "scripts") pod "e13056ae-8539-4c9e-bb89-c62a84bd3446" (UID: "e13056ae-8539-4c9e-bb89-c62a84bd3446"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.191958 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data" (OuterVolumeSpecName: "config-data") pod "e13056ae-8539-4c9e-bb89-c62a84bd3446" (UID: "e13056ae-8539-4c9e-bb89-c62a84bd3446"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.193952 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13056ae-8539-4c9e-bb89-c62a84bd3446" (UID: "e13056ae-8539-4c9e-bb89-c62a84bd3446"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.261998 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.262050 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.262067 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bvxp\" (UniqueName: \"kubernetes.io/projected/e13056ae-8539-4c9e-bb89-c62a84bd3446-kube-api-access-9bvxp\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.262085 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13056ae-8539-4c9e-bb89-c62a84bd3446-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.516744 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-r8tbd" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.516741 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-r8tbd" event={"ID":"e13056ae-8539-4c9e-bb89-c62a84bd3446","Type":"ContainerDied","Data":"630a8c22f419c52a391600c49ed5e1b99909bc291b1764453ec6ac118132e3b0"} Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.516793 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630a8c22f419c52a391600c49ed5e1b99909bc291b1764453ec6ac118132e3b0" Jan 23 16:40:27 crc kubenswrapper[4718]: I0123 16:40:27.562642 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 16:40:28 crc kubenswrapper[4718]: I0123 16:40:28.822111 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:28 crc kubenswrapper[4718]: I0123 16:40:28.822195 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:28 crc kubenswrapper[4718]: I0123 16:40:28.875424 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:40:28 crc kubenswrapper[4718]: I0123 16:40:28.875506 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:40:29 crc kubenswrapper[4718]: I0123 16:40:29.909796 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:29 crc kubenswrapper[4718]: I0123 16:40:29.910439 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.837961 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 23 16:40:30 crc kubenswrapper[4718]: E0123 16:40:30.839539 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13056ae-8539-4c9e-bb89-c62a84bd3446" containerName="aodh-db-sync" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.839644 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13056ae-8539-4c9e-bb89-c62a84bd3446" containerName="aodh-db-sync" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.840057 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13056ae-8539-4c9e-bb89-c62a84bd3446" containerName="aodh-db-sync" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.843113 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.846643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tjnkn" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.846870 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.846896 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.877106 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.969300 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.969828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shrk\" (UniqueName: \"kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.969990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:30 crc kubenswrapper[4718]: I0123 16:40:30.970168 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.072820 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8shrk\" (UniqueName: \"kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.072883 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.072936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.073023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.082231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.082924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.087011 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.105084 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8shrk\" (UniqueName: \"kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk\") pod \"aodh-0\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.181495 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:40:31 crc kubenswrapper[4718]: I0123 16:40:31.751690 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:40:31 crc kubenswrapper[4718]: W0123 16:40:31.757213 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-d5d67788daaf5ea7a6959102cce503ccc8729db1ecf5cb0fccbf7edf21d220e9 WatchSource:0}: Error finding container d5d67788daaf5ea7a6959102cce503ccc8729db1ecf5cb0fccbf7edf21d220e9: Status 404 returned error can't find the container with id d5d67788daaf5ea7a6959102cce503ccc8729db1ecf5cb0fccbf7edf21d220e9 Jan 23 16:40:32 crc kubenswrapper[4718]: I0123 16:40:32.578942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerStarted","Data":"eb53aff40f16b82c9d447fcdc41f3d24a0f1a9be06e5cb15d9768819afb6426c"} Jan 23 16:40:32 crc kubenswrapper[4718]: I0123 16:40:32.579666 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerStarted","Data":"d5d67788daaf5ea7a6959102cce503ccc8729db1ecf5cb0fccbf7edf21d220e9"} Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.258955 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.259676 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-central-agent" containerID="cri-o://96b8976b17351b6365b052e4ebce443ea82d252e3c1b405623008b60a79f194a" gracePeriod=30 Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.260542 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="proxy-httpd" containerID="cri-o://91d513ce2d4d273631e7e8caf5e80bf7f24b0c9738b464af5a7eb92d63511a54" gracePeriod=30 Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.260599 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="sg-core" containerID="cri-o://6f338651e8bce279911d757fc8ae82dc0ad356eb883160167234e4360a0de85c" gracePeriod=30 Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.260650 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-notification-agent" containerID="cri-o://6eafe82862407c38d547ea9383f9f04d0e62f1e3edcedd5d3a13a24d1fc59e08" gracePeriod=30 Jan 23 
16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.270145 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.594793 4718 generic.go:334] "Generic (PLEG): container finished" podID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerID="91d513ce2d4d273631e7e8caf5e80bf7f24b0c9738b464af5a7eb92d63511a54" exitCode=0 Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.595256 4718 generic.go:334] "Generic (PLEG): container finished" podID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerID="6f338651e8bce279911d757fc8ae82dc0ad356eb883160167234e4360a0de85c" exitCode=2 Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.594869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerDied","Data":"91d513ce2d4d273631e7e8caf5e80bf7f24b0c9738b464af5a7eb92d63511a54"} Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.595290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerDied","Data":"6f338651e8bce279911d757fc8ae82dc0ad356eb883160167234e4360a0de85c"} Jan 23 16:40:33 crc kubenswrapper[4718]: I0123 16:40:33.848020 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 16:40:34 crc kubenswrapper[4718]: I0123 16:40:34.607081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerStarted","Data":"3877a95e9a8ddb4fda5030cfe94bf4526e156896babc9c4092e4b48f42623858"} Jan 23 16:40:34 crc kubenswrapper[4718]: I0123 16:40:34.610047 4718 generic.go:334] "Generic (PLEG): container finished" podID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" 
containerID="96b8976b17351b6365b052e4ebce443ea82d252e3c1b405623008b60a79f194a" exitCode=0 Jan 23 16:40:34 crc kubenswrapper[4718]: I0123 16:40:34.610075 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerDied","Data":"96b8976b17351b6365b052e4ebce443ea82d252e3c1b405623008b60a79f194a"} Jan 23 16:40:35 crc kubenswrapper[4718]: I0123 16:40:35.627026 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerStarted","Data":"fc3e09a5d0cd0b1e0638d2be800cdf3bf51f6217fe0b23aedafdad338a2103a4"} Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.268014 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.324967 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.329143 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data\") pod \"d4f87301-48b3-44b2-aa3e-42bd55c78768\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.329553 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqns\" (UniqueName: \"kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns\") pod \"d4f87301-48b3-44b2-aa3e-42bd55c78768\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.329766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle\") pod \"d4f87301-48b3-44b2-aa3e-42bd55c78768\" (UID: \"d4f87301-48b3-44b2-aa3e-42bd55c78768\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.345714 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns" (OuterVolumeSpecName: "kube-api-access-vzqns") pod "d4f87301-48b3-44b2-aa3e-42bd55c78768" (UID: "d4f87301-48b3-44b2-aa3e-42bd55c78768"). InnerVolumeSpecName "kube-api-access-vzqns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.380746 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data" (OuterVolumeSpecName: "config-data") pod "d4f87301-48b3-44b2-aa3e-42bd55c78768" (UID: "d4f87301-48b3-44b2-aa3e-42bd55c78768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.385234 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4f87301-48b3-44b2-aa3e-42bd55c78768" (UID: "d4f87301-48b3-44b2-aa3e-42bd55c78768"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.431474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs\") pod \"08217c1d-bb06-4978-962e-541df7337fcf\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.431587 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkr62\" (UniqueName: \"kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62\") pod \"08217c1d-bb06-4978-962e-541df7337fcf\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.431737 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle\") pod \"08217c1d-bb06-4978-962e-541df7337fcf\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.431934 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data\") pod \"08217c1d-bb06-4978-962e-541df7337fcf\" (UID: \"08217c1d-bb06-4978-962e-541df7337fcf\") " Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.432590 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqns\" (UniqueName: \"kubernetes.io/projected/d4f87301-48b3-44b2-aa3e-42bd55c78768-kube-api-access-vzqns\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.432610 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.432623 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f87301-48b3-44b2-aa3e-42bd55c78768-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.433313 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs" (OuterVolumeSpecName: "logs") pod "08217c1d-bb06-4978-962e-541df7337fcf" (UID: "08217c1d-bb06-4978-962e-541df7337fcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.437061 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62" (OuterVolumeSpecName: "kube-api-access-dkr62") pod "08217c1d-bb06-4978-962e-541df7337fcf" (UID: "08217c1d-bb06-4978-962e-541df7337fcf"). InnerVolumeSpecName "kube-api-access-dkr62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.464993 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data" (OuterVolumeSpecName: "config-data") pod "08217c1d-bb06-4978-962e-541df7337fcf" (UID: "08217c1d-bb06-4978-962e-541df7337fcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.471959 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08217c1d-bb06-4978-962e-541df7337fcf" (UID: "08217c1d-bb06-4978-962e-541df7337fcf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.535515 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.535546 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08217c1d-bb06-4978-962e-541df7337fcf-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.535555 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkr62\" (UniqueName: \"kubernetes.io/projected/08217c1d-bb06-4978-962e-541df7337fcf-kube-api-access-dkr62\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.535568 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08217c1d-bb06-4978-962e-541df7337fcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.643795 4718 generic.go:334] "Generic (PLEG): container finished" podID="08217c1d-bb06-4978-962e-541df7337fcf" containerID="8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784" exitCode=137 Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.643926 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.646832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerDied","Data":"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784"} Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.646919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08217c1d-bb06-4978-962e-541df7337fcf","Type":"ContainerDied","Data":"4d5444d13b6cfadf069f850373d383a48930d8baa7f7ebbcd43c9a9706de2af5"} Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.646944 4718 scope.go:117] "RemoveContainer" containerID="8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.652214 4718 generic.go:334] "Generic (PLEG): container finished" podID="d4f87301-48b3-44b2-aa3e-42bd55c78768" containerID="7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac" exitCode=137 Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.652260 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d4f87301-48b3-44b2-aa3e-42bd55c78768","Type":"ContainerDied","Data":"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac"} Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.652265 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.652288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d4f87301-48b3-44b2-aa3e-42bd55c78768","Type":"ContainerDied","Data":"af2abe38de11c3cf0ae2339b07209490156e5a05ba462e4b80a7e37c9490d441"} Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.699118 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.774669 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.792054 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: E0123 16:40:36.793041 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-log" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793071 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-log" Jan 23 16:40:36 crc kubenswrapper[4718]: E0123 16:40:36.793094 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f87301-48b3-44b2-aa3e-42bd55c78768" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793103 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f87301-48b3-44b2-aa3e-42bd55c78768" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 16:40:36 crc kubenswrapper[4718]: E0123 16:40:36.793140 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-metadata" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793149 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-metadata" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793505 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-log" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793532 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="08217c1d-bb06-4978-962e-541df7337fcf" containerName="nova-metadata-metadata" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.793550 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f87301-48b3-44b2-aa3e-42bd55c78768" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.795470 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.798659 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.799361 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.813704 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.828587 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.847109 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.856205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data\") pod \"nova-metadata-0\" 
(UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.856345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.856929 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpks9\" (UniqueName: \"kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.857077 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.857408 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.864407 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.866202 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.868746 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.868930 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.869167 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.889968 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960125 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpks9\" (UniqueName: \"kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960252 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960336 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960622 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960709 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44dcw\" (UniqueName: \"kubernetes.io/projected/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-kube-api-access-44dcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960927 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.960992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.963541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.966903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.976451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.978188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " 
pod="openstack/nova-metadata-0" Jan 23 16:40:36 crc kubenswrapper[4718]: I0123 16:40:36.979065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpks9\" (UniqueName: \"kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9\") pod \"nova-metadata-0\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " pod="openstack/nova-metadata-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.053350 4718 scope.go:117] "RemoveContainer" containerID="23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.063295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44dcw\" (UniqueName: \"kubernetes.io/projected/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-kube-api-access-44dcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.063393 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.063577 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.063646 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.063688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.069999 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.070835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.070922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.076106 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.085724 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44dcw\" (UniqueName: \"kubernetes.io/projected/2ec50566-57bf-4ddf-aa36-4dfe1fa36d07-kube-api-access-44dcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.113860 4718 scope.go:117] "RemoveContainer" containerID="8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784" Jan 23 16:40:37 crc kubenswrapper[4718]: E0123 16:40:37.115617 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784\": container with ID starting with 8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784 not found: ID does not exist" containerID="8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.115665 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784"} err="failed to get container status \"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784\": rpc error: code = NotFound desc = could not find container \"8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784\": container with ID starting with 8f0ed0bd254e1a0bdc0667bd5acf472ad21391abe81b5cf68ea2c418e90cf784 not found: ID does not exist" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.115689 4718 scope.go:117] "RemoveContainer" containerID="23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143" Jan 23 16:40:37 crc kubenswrapper[4718]: E0123 16:40:37.116028 4718 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143\": container with ID starting with 23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143 not found: ID does not exist" containerID="23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.116050 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143"} err="failed to get container status \"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143\": rpc error: code = NotFound desc = could not find container \"23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143\": container with ID starting with 23b87573a18d1514f7aaf45c6c84e18262b5935b8e3a7cc970316c710f519143 not found: ID does not exist" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.116071 4718 scope.go:117] "RemoveContainer" containerID="7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.134080 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.146355 4718 scope.go:117] "RemoveContainer" containerID="7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac" Jan 23 16:40:37 crc kubenswrapper[4718]: E0123 16:40:37.146859 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac\": container with ID starting with 7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac not found: ID does not exist" containerID="7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.146915 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac"} err="failed to get container status \"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac\": rpc error: code = NotFound desc = could not find container \"7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac\": container with ID starting with 7a6c8ed996e0448731261f26a234a1439c5bb58b523527040dad7475e81affac not found: ID does not exist" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.155428 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08217c1d-bb06-4978-962e-541df7337fcf" path="/var/lib/kubelet/pods/08217c1d-bb06-4978-962e-541df7337fcf/volumes" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.156221 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f87301-48b3-44b2-aa3e-42bd55c78768" path="/var/lib/kubelet/pods/d4f87301-48b3-44b2-aa3e-42bd55c78768/volumes" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.188895 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.670834 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-api" containerID="cri-o://eb53aff40f16b82c9d447fcdc41f3d24a0f1a9be06e5cb15d9768819afb6426c" gracePeriod=30 Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.671897 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerStarted","Data":"087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83"} Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.672324 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-listener" containerID="cri-o://087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83" gracePeriod=30 Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.672391 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-notifier" containerID="cri-o://fc3e09a5d0cd0b1e0638d2be800cdf3bf51f6217fe0b23aedafdad338a2103a4" gracePeriod=30 Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.672440 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-evaluator" containerID="cri-o://3877a95e9a8ddb4fda5030cfe94bf4526e156896babc9c4092e4b48f42623858" gracePeriod=30 Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.703554 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.798044 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-0" podStartSLOduration=2.447239449 podStartE2EDuration="7.79802182s" podCreationTimestamp="2026-01-23 16:40:30 +0000 UTC" firstStartedPulling="2026-01-23 16:40:31.763177642 +0000 UTC m=+1432.910419633" lastFinishedPulling="2026-01-23 16:40:37.113960013 +0000 UTC m=+1438.261202004" observedRunningTime="2026-01-23 16:40:37.707787192 +0000 UTC m=+1438.855029183" watchObservedRunningTime="2026-01-23 16:40:37.79802182 +0000 UTC m=+1438.945263811" Jan 23 16:40:37 crc kubenswrapper[4718]: I0123 16:40:37.841667 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.698653 4718 generic.go:334] "Generic (PLEG): container finished" podID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerID="fc3e09a5d0cd0b1e0638d2be800cdf3bf51f6217fe0b23aedafdad338a2103a4" exitCode=0 Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.699195 4718 generic.go:334] "Generic (PLEG): container finished" podID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerID="3877a95e9a8ddb4fda5030cfe94bf4526e156896babc9c4092e4b48f42623858" exitCode=0 Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.699204 4718 generic.go:334] "Generic (PLEG): container finished" podID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerID="eb53aff40f16b82c9d447fcdc41f3d24a0f1a9be06e5cb15d9768819afb6426c" exitCode=0 Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.699267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerDied","Data":"fc3e09a5d0cd0b1e0638d2be800cdf3bf51f6217fe0b23aedafdad338a2103a4"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.699292 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerDied","Data":"3877a95e9a8ddb4fda5030cfe94bf4526e156896babc9c4092e4b48f42623858"} Jan 23 16:40:38 crc 
kubenswrapper[4718]: I0123 16:40:38.699301 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerDied","Data":"eb53aff40f16b82c9d447fcdc41f3d24a0f1a9be06e5cb15d9768819afb6426c"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.701666 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07","Type":"ContainerStarted","Data":"eda6f941fc6a1dba075b9d40308c3833af6c3c805a5887730aa45dc1f38fc8ca"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.701694 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ec50566-57bf-4ddf-aa36-4dfe1fa36d07","Type":"ContainerStarted","Data":"293de5aa943c29be9c474a4f8e19154b8115d7eba7042dbdd1fd45fd6cb45e42"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.705104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerStarted","Data":"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.705147 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerStarted","Data":"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.705158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerStarted","Data":"9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.710008 4718 generic.go:334] "Generic (PLEG): container finished" podID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" 
containerID="6eafe82862407c38d547ea9383f9f04d0e62f1e3edcedd5d3a13a24d1fc59e08" exitCode=0 Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.710046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerDied","Data":"6eafe82862407c38d547ea9383f9f04d0e62f1e3edcedd5d3a13a24d1fc59e08"} Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.738587 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.738561809 podStartE2EDuration="2.738561809s" podCreationTimestamp="2026-01-23 16:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:38.727803089 +0000 UTC m=+1439.875045090" watchObservedRunningTime="2026-01-23 16:40:38.738561809 +0000 UTC m=+1439.885803820" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.753962 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7539377849999997 podStartE2EDuration="2.753937785s" podCreationTimestamp="2026-01-23 16:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:38.75042506 +0000 UTC m=+1439.897667061" watchObservedRunningTime="2026-01-23 16:40:38.753937785 +0000 UTC m=+1439.901179776" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.827861 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.828806 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.833107 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.834230 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 16:40:38 crc kubenswrapper[4718]: I0123 16:40:38.973143 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.045975 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9ds8\" (UniqueName: \"kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047265 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047295 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047466 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.047734 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data\") pod \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\" (UID: \"a1043069-738a-4c98-ba8b-6b5b5bbd0856\") " Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.048810 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.049467 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.049531 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.059923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts" (OuterVolumeSpecName: "scripts") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.060127 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8" (OuterVolumeSpecName: "kube-api-access-v9ds8") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "kube-api-access-v9ds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.088099 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.154977 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.155014 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.155033 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9ds8\" (UniqueName: \"kubernetes.io/projected/a1043069-738a-4c98-ba8b-6b5b5bbd0856-kube-api-access-v9ds8\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.155044 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1043069-738a-4c98-ba8b-6b5b5bbd0856-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.214032 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data" (OuterVolumeSpecName: "config-data") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.224988 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1043069-738a-4c98-ba8b-6b5b5bbd0856" (UID: "a1043069-738a-4c98-ba8b-6b5b5bbd0856"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.259950 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.259988 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1043069-738a-4c98-ba8b-6b5b5bbd0856-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.727331 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1043069-738a-4c98-ba8b-6b5b5bbd0856","Type":"ContainerDied","Data":"82411c3a7a39b0d26d2cce33c444dbcac0e38dcca6fb44fe943457e3cbf5677c"} Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.727950 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.727976 4718 scope.go:117] "RemoveContainer" containerID="91d513ce2d4d273631e7e8caf5e80bf7f24b0c9738b464af5a7eb92d63511a54" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.727764 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.736416 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.757183 4718 scope.go:117] "RemoveContainer" containerID="6f338651e8bce279911d757fc8ae82dc0ad356eb883160167234e4360a0de85c" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.793894 4718 scope.go:117] "RemoveContainer" containerID="6eafe82862407c38d547ea9383f9f04d0e62f1e3edcedd5d3a13a24d1fc59e08" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.808681 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.825839 4718 scope.go:117] "RemoveContainer" containerID="96b8976b17351b6365b052e4ebce443ea82d252e3c1b405623008b60a79f194a" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.826027 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.845359 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:39 crc kubenswrapper[4718]: E0123 16:40:39.846169 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="proxy-httpd" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846247 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="proxy-httpd" Jan 23 16:40:39 crc kubenswrapper[4718]: E0123 16:40:39.846329 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-central-agent" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846380 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-central-agent" 
Jan 23 16:40:39 crc kubenswrapper[4718]: E0123 16:40:39.846445 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-notification-agent" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846494 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-notification-agent" Jan 23 16:40:39 crc kubenswrapper[4718]: E0123 16:40:39.846542 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="sg-core" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846597 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="sg-core" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846907 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-notification-agent" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.846996 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="sg-core" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.847074 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="proxy-httpd" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.847161 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" containerName="ceilometer-central-agent" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.849427 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.864297 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.866981 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.867255 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.973231 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"] Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.975569 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.979999 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjs7\" (UniqueName: \"kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980255 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " 
pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980297 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980686 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.980814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:39 crc kubenswrapper[4718]: I0123 16:40:39.989419 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"] Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084525 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: 
\"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084610 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjs7\" (UniqueName: \"kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084804 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0" Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084825 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084847 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084925 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cbr\" (UniqueName: \"kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.084967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.085002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.085042 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.085135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.085806 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.086189 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.092877 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.095503 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.100345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.102059 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.104780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjs7\" (UniqueName: \"kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7\") pod \"ceilometer-0\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.184967 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.186655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.187480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.188074 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.188244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.188426 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5cbr\" (UniqueName: \"kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.188474 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.188534 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.189060 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.189360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.189381 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.189616 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.212231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5cbr\" (UniqueName: \"kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr\") pod \"dnsmasq-dns-f84f9ccf-fsdtq\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.303202 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:40 crc kubenswrapper[4718]: I0123 16:40:40.785657 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.056941 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"]
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.155321 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1043069-738a-4c98-ba8b-6b5b5bbd0856" path="/var/lib/kubelet/pods/a1043069-738a-4c98-ba8b-6b5b5bbd0856/volumes"
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.756429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerStarted","Data":"5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159"}
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.756976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerStarted","Data":"f9f6b300b6acb9d938bcd8e217089232a68ac9134f2a144234decb8d38cb70ae"}
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.759379 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerID="9ea0570e93f4aea256b10516622783bdfafc9b1709d2733181190e89d2277159" exitCode=0
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.759524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" event={"ID":"1b08d018-0695-4abc-8779-8d448c1ac2c2","Type":"ContainerDied","Data":"9ea0570e93f4aea256b10516622783bdfafc9b1709d2733181190e89d2277159"}
Jan 23 16:40:41 crc kubenswrapper[4718]: I0123 16:40:41.759547 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" event={"ID":"1b08d018-0695-4abc-8779-8d448c1ac2c2","Type":"ContainerStarted","Data":"90d748095fd3914e9ff43a829f818e9ec10a8fd0fc0c49ccbbd1224c0034cd28"}
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.135031 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.135400 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.189586 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.530751 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.577674 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.772532 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerStarted","Data":"bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb"}
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.776817 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" event={"ID":"1b08d018-0695-4abc-8779-8d448c1ac2c2","Type":"ContainerStarted","Data":"0f8abfdbe5d6b69433d92944bc28ddd508f31829fa8b1391ad2932a3e2b16118"}
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.777019 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-log" containerID="cri-o://d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6" gracePeriod=30
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.777069 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-api" containerID="cri-o://4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86" gracePeriod=30
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.777196 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq"
Jan 23 16:40:42 crc kubenswrapper[4718]: I0123 16:40:42.807242 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" podStartSLOduration=3.807222589 podStartE2EDuration="3.807222589s" podCreationTimestamp="2026-01-23 16:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:42.801942676 +0000 UTC m=+1443.949184667" watchObservedRunningTime="2026-01-23 16:40:42.807222589 +0000 UTC m=+1443.954464580"
Jan 23 16:40:43 crc kubenswrapper[4718]: I0123 16:40:43.789982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerStarted","Data":"06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750"}
Jan 23 16:40:43 crc kubenswrapper[4718]: I0123 16:40:43.792391 4718 generic.go:334] "Generic (PLEG): container finished" podID="37157014-8ad6-4d39-b8ad-376689a6340b" containerID="d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6" exitCode=143
Jan 23 16:40:43 crc kubenswrapper[4718]: I0123 16:40:43.792477 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerDied","Data":"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"}
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.818369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerStarted","Data":"1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f"}
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.819349 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.818646 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-central-agent" containerID="cri-o://5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159" gracePeriod=30
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.819487 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="proxy-httpd" containerID="cri-o://1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f" gracePeriod=30
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.819563 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-notification-agent" containerID="cri-o://bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb" gracePeriod=30
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.819619 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="sg-core" containerID="cri-o://06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750" gracePeriod=30
Jan 23 16:40:45 crc kubenswrapper[4718]: I0123 16:40:45.855347 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.310960708 podStartE2EDuration="6.855313837s" podCreationTimestamp="2026-01-23 16:40:39 +0000 UTC" firstStartedPulling="2026-01-23 16:40:40.776202439 +0000 UTC m=+1441.923444430" lastFinishedPulling="2026-01-23 16:40:44.320555578 +0000 UTC m=+1445.467797559" observedRunningTime="2026-01-23 16:40:45.841009221 +0000 UTC m=+1446.988251222" watchObservedRunningTime="2026-01-23 16:40:45.855313837 +0000 UTC m=+1447.002555828"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.714112 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.810590 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle\") pod \"37157014-8ad6-4d39-b8ad-376689a6340b\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") "
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.810759 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs\") pod \"37157014-8ad6-4d39-b8ad-376689a6340b\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") "
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.810969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prt9j\" (UniqueName: \"kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j\") pod \"37157014-8ad6-4d39-b8ad-376689a6340b\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") "
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.811020 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data\") pod \"37157014-8ad6-4d39-b8ad-376689a6340b\" (UID: \"37157014-8ad6-4d39-b8ad-376689a6340b\") "
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.811485 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs" (OuterVolumeSpecName: "logs") pod "37157014-8ad6-4d39-b8ad-376689a6340b" (UID: "37157014-8ad6-4d39-b8ad-376689a6340b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.812107 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37157014-8ad6-4d39-b8ad-376689a6340b-logs\") on node \"crc\" DevicePath \"\""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.832097 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j" (OuterVolumeSpecName: "kube-api-access-prt9j") pod "37157014-8ad6-4d39-b8ad-376689a6340b" (UID: "37157014-8ad6-4d39-b8ad-376689a6340b"). InnerVolumeSpecName "kube-api-access-prt9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844317 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerID="1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f" exitCode=0
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844356 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerID="06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750" exitCode=2
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844365 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerID="bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb" exitCode=0
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerDied","Data":"1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f"}
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844440 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerDied","Data":"06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750"}
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.844453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerDied","Data":"bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb"}
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.847758 4718 generic.go:334] "Generic (PLEG): container finished" podID="37157014-8ad6-4d39-b8ad-376689a6340b" containerID="4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86" exitCode=0
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.847793 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerDied","Data":"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"}
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.847817 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37157014-8ad6-4d39-b8ad-376689a6340b","Type":"ContainerDied","Data":"401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031"}
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.847835 4718 scope.go:117] "RemoveContainer" containerID="4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.848014 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.862995 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data" (OuterVolumeSpecName: "config-data") pod "37157014-8ad6-4d39-b8ad-376689a6340b" (UID: "37157014-8ad6-4d39-b8ad-376689a6340b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.887889 4718 scope.go:117] "RemoveContainer" containerID="d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.892806 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37157014-8ad6-4d39-b8ad-376689a6340b" (UID: "37157014-8ad6-4d39-b8ad-376689a6340b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.917134 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.917176 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prt9j\" (UniqueName: \"kubernetes.io/projected/37157014-8ad6-4d39-b8ad-376689a6340b-kube-api-access-prt9j\") on node \"crc\" DevicePath \"\""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.917195 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37157014-8ad6-4d39-b8ad-376689a6340b-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.922258 4718 scope.go:117] "RemoveContainer" containerID="4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"
Jan 23 16:40:46 crc kubenswrapper[4718]: E0123 16:40:46.923371 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86\": container with ID starting with 4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86 not found: ID does not exist" containerID="4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.923426 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86"} err="failed to get container status \"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86\": rpc error: code = NotFound desc = could not find container \"4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86\": container with ID starting with 4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86 not found: ID does not exist"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.923459 4718 scope.go:117] "RemoveContainer" containerID="d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"
Jan 23 16:40:46 crc kubenswrapper[4718]: E0123 16:40:46.923869 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6\": container with ID starting with d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6 not found: ID does not exist" containerID="d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"
Jan 23 16:40:46 crc kubenswrapper[4718]: I0123 16:40:46.923893 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6"} err="failed to get container status \"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6\": rpc error: code = NotFound desc = could not find container \"d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6\": container with ID starting with d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6 not found: ID does not exist"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.134777 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.134829 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.192670 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.213661 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.229183 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.249320 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.256398 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:47 crc kubenswrapper[4718]: E0123 16:40:47.257309 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-api"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.257334 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-api"
Jan 23 16:40:47 crc kubenswrapper[4718]: E0123 16:40:47.257352 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-log"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.257361 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-log"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.257714 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-api"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.257755 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" containerName="nova-api-log"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.259383 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.264207 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.264622 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.265171 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.269702 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.431831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.431946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.432003 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.432223 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bfs9\" (UniqueName: \"kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.432264 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.432313 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.533897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bfs9\" (UniqueName: \"kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.533963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.534024 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.534138 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.534189 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.534236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.534323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.542691 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.543061 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.543146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.555439 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.558127 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bfs9\" (UniqueName: \"kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9\") pod \"nova-api-0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.576524 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 16:40:47 crc kubenswrapper[4718]: I0123 16:40:47.895768 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.127104 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.150811 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.150905 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.198891 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bpll2"]
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.243498 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bpll2"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.246767 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bpll2"]
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.257756 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.258551 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.349530 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"]
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.352077 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jgb"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.356770 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"]
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.360441 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.360594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2"
Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.360675 4718
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnpc\" (UniqueName: \"kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.360717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465153 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf2z2\" (UniqueName: \"kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc 
kubenswrapper[4718]: I0123 16:40:48.465327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465449 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.465545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnpc\" (UniqueName: \"kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.472195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.473242 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.482077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.486015 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnpc\" (UniqueName: \"kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc\") pod \"nova-cell1-cell-mapping-bpll2\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.567462 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.567542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf2z2\" (UniqueName: \"kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.567608 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.568025 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.568077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.587570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf2z2\" (UniqueName: \"kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2\") pod \"redhat-operators-g4jgb\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.650576 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.694047 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.906732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerStarted","Data":"b0d478e010846ab5a51efb0e69eca260cc0db9f38af91d508549bcc5efc957cc"} Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.907208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerStarted","Data":"c019728c4bd97f273acbd3e64866f54e85d86263074ec8113f92841283b49cdd"} Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.907222 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerStarted","Data":"7ebba074891d3c2bc942588681ac7a81b9a42456dd3be5c1495ab3cf058a02cd"} Jan 23 16:40:48 crc kubenswrapper[4718]: I0123 16:40:48.933264 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.933241011 podStartE2EDuration="1.933241011s" podCreationTimestamp="2026-01-23 16:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:48.927048663 +0000 UTC m=+1450.074290654" watchObservedRunningTime="2026-01-23 16:40:48.933241011 +0000 UTC m=+1450.080483002" Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.165399 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37157014-8ad6-4d39-b8ad-376689a6340b" path="/var/lib/kubelet/pods/37157014-8ad6-4d39-b8ad-376689a6340b/volumes" Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.238880 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bpll2"] Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.365840 
4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"] Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.917188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bpll2" event={"ID":"9eeffb69-5654-4e1d-ae21-580a5a235246","Type":"ContainerStarted","Data":"74d28042ae3100b01691674a2f9167b7ddfdcf96c1f35077b06301e619483e51"} Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.917695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bpll2" event={"ID":"9eeffb69-5654-4e1d-ae21-580a5a235246","Type":"ContainerStarted","Data":"c3e58185b64bad0fe0c8938a31adc33bc17cc24f19f334807862bbb9adfc1710"} Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.919294 4718 generic.go:334] "Generic (PLEG): container finished" podID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerID="e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe" exitCode=0 Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.919347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerDied","Data":"e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe"} Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.919390 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerStarted","Data":"9547e162a25bbfae0293479d39a49cc415f56607677815cd9c2db7ce73637c2f"} Jan 23 16:40:49 crc kubenswrapper[4718]: I0123 16:40:49.944177 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bpll2" podStartSLOduration=1.9441575119999999 podStartE2EDuration="1.944157512s" podCreationTimestamp="2026-01-23 16:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:40:49.932529107 +0000 UTC m=+1451.079771098" watchObservedRunningTime="2026-01-23 16:40:49.944157512 +0000 UTC m=+1451.091399503" Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.304457 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.407549 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.409828 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="dnsmasq-dns" containerID="cri-o://2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba" gracePeriod=10 Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.963435 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerID="5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159" exitCode=0 Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.964132 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerDied","Data":"5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159"} Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.966224 4718 generic.go:334] "Generic (PLEG): container finished" podID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerID="2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba" exitCode=0 Jan 23 16:40:50 crc kubenswrapper[4718]: I0123 16:40:50.967198 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" 
event={"ID":"942e0bee-95f9-4e8d-8157-8353e9b80b41","Type":"ContainerDied","Data":"2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba"} Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.085797 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.256609 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.257184 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.257221 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnxh2\" (UniqueName: \"kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.257248 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.257347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.257457 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config\") pod \"942e0bee-95f9-4e8d-8157-8353e9b80b41\" (UID: \"942e0bee-95f9-4e8d-8157-8353e9b80b41\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.268900 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2" (OuterVolumeSpecName: "kube-api-access-gnxh2") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "kube-api-access-gnxh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.365802 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnxh2\" (UniqueName: \"kubernetes.io/projected/942e0bee-95f9-4e8d-8157-8353e9b80b41-kube-api-access-gnxh2\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.379478 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config" (OuterVolumeSpecName: "config") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.447142 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.449247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.458456 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.468695 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.468725 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.468735 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.468770 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.483759 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "942e0bee-95f9-4e8d-8157-8353e9b80b41" (UID: "942e0bee-95f9-4e8d-8157-8353e9b80b41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.555194 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.573702 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/942e0bee-95f9-4e8d-8157-8353e9b80b41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675480 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbjs7\" (UniqueName: \"kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675741 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675760 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675877 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.675971 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.676154 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.676192 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle\") pod \"bc2fa59e-66c4-44b0-941a-3294b39afee9\" (UID: \"bc2fa59e-66c4-44b0-941a-3294b39afee9\") " Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.676288 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.676978 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.677000 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc2fa59e-66c4-44b0-941a-3294b39afee9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.680328 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7" (OuterVolumeSpecName: "kube-api-access-gbjs7") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "kube-api-access-gbjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.683978 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts" (OuterVolumeSpecName: "scripts") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.723846 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.779516 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.779545 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbjs7\" (UniqueName: \"kubernetes.io/projected/bc2fa59e-66c4-44b0-941a-3294b39afee9-kube-api-access-gbjs7\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.779555 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.801301 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.834071 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data" (OuterVolumeSpecName: "config-data") pod "bc2fa59e-66c4-44b0-941a-3294b39afee9" (UID: "bc2fa59e-66c4-44b0-941a-3294b39afee9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.882308 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.882341 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2fa59e-66c4-44b0-941a-3294b39afee9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.980541 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" event={"ID":"942e0bee-95f9-4e8d-8157-8353e9b80b41","Type":"ContainerDied","Data":"0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18"} Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.980571 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-5fb9b" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.980589 4718 scope.go:117] "RemoveContainer" containerID="2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.988464 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.988466 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc2fa59e-66c4-44b0-941a-3294b39afee9","Type":"ContainerDied","Data":"f9f6b300b6acb9d938bcd8e217089232a68ac9134f2a144234decb8d38cb70ae"} Jan 23 16:40:51 crc kubenswrapper[4718]: I0123 16:40:51.996424 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerStarted","Data":"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a"} Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.020952 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.032081 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-5fb9b"] Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.045847 4718 scope.go:117] "RemoveContainer" containerID="1698d98ae401ad23d007232fc7d0b7c837a29204e3d91afa2bbd491010ebb3d2" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.089882 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.102354 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.115692 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118141 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="init" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118179 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="init" Jan 
23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118225 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-notification-agent" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118238 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-notification-agent" Jan 23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118274 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="proxy-httpd" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118287 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="proxy-httpd" Jan 23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118314 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="dnsmasq-dns" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118328 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="dnsmasq-dns" Jan 23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118345 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="sg-core" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118355 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="sg-core" Jan 23 16:40:52 crc kubenswrapper[4718]: E0123 16:40:52.118471 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-central-agent" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.118516 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-central-agent" Jan 
23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.119083 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-central-agent" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.119133 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" containerName="dnsmasq-dns" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.119174 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="ceilometer-notification-agent" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.119190 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="sg-core" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.119206 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" containerName="proxy-httpd" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.122686 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.124794 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.126708 4718 scope.go:117] "RemoveContainer" containerID="1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.126920 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.127474 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p4c6x\" (UniqueName: \"kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189590 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.189927 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.248370 4718 scope.go:117] "RemoveContainer" containerID="06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.267540 4718 scope.go:117] "RemoveContainer" containerID="bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.291934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd\") pod \"ceilometer-0\" (UID: 
\"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292212 4718 scope.go:117] "RemoveContainer" containerID="5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c6x\" (UniqueName: \"kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292272 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " 
pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292288 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.292790 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.297684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.297732 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.297909 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data\") pod 
\"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.302192 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.309804 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c6x\" (UniqueName: \"kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x\") pod \"ceilometer-0\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " pod="openstack/ceilometer-0" Jan 23 16:40:52 crc kubenswrapper[4718]: I0123 16:40:52.547568 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:40:53 crc kubenswrapper[4718]: I0123 16:40:53.091660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:40:53 crc kubenswrapper[4718]: I0123 16:40:53.154271 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942e0bee-95f9-4e8d-8157-8353e9b80b41" path="/var/lib/kubelet/pods/942e0bee-95f9-4e8d-8157-8353e9b80b41/volumes" Jan 23 16:40:53 crc kubenswrapper[4718]: I0123 16:40:53.155338 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2fa59e-66c4-44b0-941a-3294b39afee9" path="/var/lib/kubelet/pods/bc2fa59e-66c4-44b0-941a-3294b39afee9/volumes" Jan 23 16:40:54 crc kubenswrapper[4718]: I0123 16:40:54.029292 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerStarted","Data":"928a5a33195525866a21ad20ef6032af1e920a04ad2c031092adc9044f5a2d8f"} Jan 23 16:40:56 crc kubenswrapper[4718]: I0123 16:40:56.090910 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerStarted","Data":"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01"} Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.106293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerStarted","Data":"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18"} Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.108601 4718 generic.go:334] "Generic (PLEG): container finished" podID="9eeffb69-5654-4e1d-ae21-580a5a235246" containerID="74d28042ae3100b01691674a2f9167b7ddfdcf96c1f35077b06301e619483e51" exitCode=0 Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.108675 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bpll2" event={"ID":"9eeffb69-5654-4e1d-ae21-580a5a235246","Type":"ContainerDied","Data":"74d28042ae3100b01691674a2f9167b7ddfdcf96c1f35077b06301e619483e51"} Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.111805 4718 generic.go:334] "Generic (PLEG): container finished" podID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerID="28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a" exitCode=0 Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.111838 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerDied","Data":"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a"} Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.155924 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.155989 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.165019 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.165383 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.580769 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:57 crc kubenswrapper[4718]: I0123 16:40:57.581482 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.127323 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerStarted","Data":"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34"} Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.129601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerStarted","Data":"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353"} Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.163314 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4jgb" podStartSLOduration=2.5486606800000002 podStartE2EDuration="10.163293674s" podCreationTimestamp="2026-01-23 16:40:48 +0000 UTC" firstStartedPulling="2026-01-23 16:40:49.921673334 +0000 UTC m=+1451.068915315" lastFinishedPulling="2026-01-23 16:40:57.536306318 +0000 UTC m=+1458.683548309" observedRunningTime="2026-01-23 16:40:58.151692329 +0000 UTC m=+1459.298934320" watchObservedRunningTime="2026-01-23 16:40:58.163293674 +0000 UTC m=+1459.310535665" Jan 23 16:40:58 crc 
kubenswrapper[4718]: I0123 16:40:58.595688 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.10:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.596313 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.10:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.615368 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:58 crc kubenswrapper[4718]: E0123 16:40:58.687066 4718 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6de06d58332aca390ddf70d1abee4538c3ca75625dbc32acfdecc247bdfdd8ba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6de06d58332aca390ddf70d1abee4538c3ca75625dbc32acfdecc247bdfdd8ba/diff: no such file or directory, extraDiskErr: Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.694413 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.694640 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.695381 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bnpc\" (UniqueName: \"kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc\") pod 
\"9eeffb69-5654-4e1d-ae21-580a5a235246\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.695486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle\") pod \"9eeffb69-5654-4e1d-ae21-580a5a235246\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.695656 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data\") pod \"9eeffb69-5654-4e1d-ae21-580a5a235246\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.695800 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts\") pod \"9eeffb69-5654-4e1d-ae21-580a5a235246\" (UID: \"9eeffb69-5654-4e1d-ae21-580a5a235246\") " Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.707971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc" (OuterVolumeSpecName: "kube-api-access-9bnpc") pod "9eeffb69-5654-4e1d-ae21-580a5a235246" (UID: "9eeffb69-5654-4e1d-ae21-580a5a235246"). InnerVolumeSpecName "kube-api-access-9bnpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.723769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts" (OuterVolumeSpecName: "scripts") pod "9eeffb69-5654-4e1d-ae21-580a5a235246" (UID: "9eeffb69-5654-4e1d-ae21-580a5a235246"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.753776 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data" (OuterVolumeSpecName: "config-data") pod "9eeffb69-5654-4e1d-ae21-580a5a235246" (UID: "9eeffb69-5654-4e1d-ae21-580a5a235246"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.764118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eeffb69-5654-4e1d-ae21-580a5a235246" (UID: "9eeffb69-5654-4e1d-ae21-580a5a235246"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.801276 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.801317 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.801350 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bnpc\" (UniqueName: \"kubernetes.io/projected/9eeffb69-5654-4e1d-ae21-580a5a235246-kube-api-access-9bnpc\") on node \"crc\" DevicePath \"\"" Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.801364 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeffb69-5654-4e1d-ae21-580a5a235246-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 
16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.875300 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:40:58 crc kubenswrapper[4718]: I0123 16:40:58.875356 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.157391 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bpll2" Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.159284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bpll2" event={"ID":"9eeffb69-5654-4e1d-ae21-580a5a235246","Type":"ContainerDied","Data":"c3e58185b64bad0fe0c8938a31adc33bc17cc24f19f334807862bbb9adfc1710"} Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.159324 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3e58185b64bad0fe0c8938a31adc33bc17cc24f19f334807862bbb9adfc1710" Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.344468 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.349852 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-log" containerID="cri-o://c019728c4bd97f273acbd3e64866f54e85d86263074ec8113f92841283b49cdd" gracePeriod=30 Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 
16:40:59.350034 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-api" containerID="cri-o://b0d478e010846ab5a51efb0e69eca260cc0db9f38af91d508549bcc5efc957cc" gracePeriod=30 Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.366026 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.366712 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8ad97822-bebe-4e44-97c5-92732ed20095" containerName="nova-scheduler-scheduler" containerID="cri-o://c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14" gracePeriod=30 Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.424096 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:40:59 crc kubenswrapper[4718]: I0123 16:40:59.757078 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4jgb" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="registry-server" probeResult="failure" output=< Jan 23 16:40:59 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 16:40:59 crc kubenswrapper[4718]: > Jan 23 16:41:00 crc kubenswrapper[4718]: E0123 16:41:00.163572 4718 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1241c9fdf6635231c282ad51caaa75c58abf572eed243ac971a08a94ecc270ae/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1241c9fdf6635231c282ad51caaa75c58abf572eed243ac971a08a94ecc270ae/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-568d7fd7cf-5fb9b_942e0bee-95f9-4e8d-8157-8353e9b80b41/dnsmasq-dns/0.log" to get inode usage: stat 
/var/log/pods/openstack_dnsmasq-dns-568d7fd7cf-5fb9b_942e0bee-95f9-4e8d-8157-8353e9b80b41/dnsmasq-dns/0.log: no such file or directory Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.181558 4718 generic.go:334] "Generic (PLEG): container finished" podID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerID="c019728c4bd97f273acbd3e64866f54e85d86263074ec8113f92841283b49cdd" exitCode=143 Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.181678 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerDied","Data":"c019728c4bd97f273acbd3e64866f54e85d86263074ec8113f92841283b49cdd"} Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.185957 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerStarted","Data":"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070"} Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.186243 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" containerID="cri-o://5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84" gracePeriod=30 Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.186405 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" containerID="cri-o://e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3" gracePeriod=30 Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.186935 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:41:00 crc kubenswrapper[4718]: I0123 16:41:00.223794 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.62253425 podStartE2EDuration="8.22377368s" podCreationTimestamp="2026-01-23 16:40:52 +0000 UTC" firstStartedPulling="2026-01-23 16:40:53.57910219 +0000 UTC m=+1454.726344181" lastFinishedPulling="2026-01-23 16:40:59.18034162 +0000 UTC m=+1460.327583611" observedRunningTime="2026-01-23 16:41:00.218555689 +0000 UTC m=+1461.365797680" watchObservedRunningTime="2026-01-23 16:41:00.22377368 +0000 UTC m=+1461.371015661" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.210905 4718 generic.go:334] "Generic (PLEG): container finished" podID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerID="5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84" exitCode=143 Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.210980 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerDied","Data":"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84"} Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.213051 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ad97822-bebe-4e44-97c5-92732ed20095" containerID="c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14" exitCode=0 Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.213193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ad97822-bebe-4e44-97c5-92732ed20095","Type":"ContainerDied","Data":"c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14"} Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.427364 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.476901 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle\") pod \"8ad97822-bebe-4e44-97c5-92732ed20095\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.477167 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htg6n\" (UniqueName: \"kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n\") pod \"8ad97822-bebe-4e44-97c5-92732ed20095\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.477499 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data\") pod \"8ad97822-bebe-4e44-97c5-92732ed20095\" (UID: \"8ad97822-bebe-4e44-97c5-92732ed20095\") " Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.513968 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n" (OuterVolumeSpecName: "kube-api-access-htg6n") pod "8ad97822-bebe-4e44-97c5-92732ed20095" (UID: "8ad97822-bebe-4e44-97c5-92732ed20095"). InnerVolumeSpecName "kube-api-access-htg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.536857 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ad97822-bebe-4e44-97c5-92732ed20095" (UID: "8ad97822-bebe-4e44-97c5-92732ed20095"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.553853 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data" (OuterVolumeSpecName: "config-data") pod "8ad97822-bebe-4e44-97c5-92732ed20095" (UID: "8ad97822-bebe-4e44-97c5-92732ed20095"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.581005 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.581045 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad97822-bebe-4e44-97c5-92732ed20095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:01 crc kubenswrapper[4718]: I0123 16:41:01.581057 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htg6n\" (UniqueName: \"kubernetes.io/projected/8ad97822-bebe-4e44-97c5-92732ed20095-kube-api-access-htg6n\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.229594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ad97822-bebe-4e44-97c5-92732ed20095","Type":"ContainerDied","Data":"2dbba3883e7b0fb9d6e31e6fe97105f571a91e2729ccea20bf0efedb488f0d34"} Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.229725 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.230072 4718 scope.go:117] "RemoveContainer" containerID="c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.276415 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.293180 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.309675 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:41:02 crc kubenswrapper[4718]: E0123 16:41:02.310549 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad97822-bebe-4e44-97c5-92732ed20095" containerName="nova-scheduler-scheduler" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.310569 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad97822-bebe-4e44-97c5-92732ed20095" containerName="nova-scheduler-scheduler" Jan 23 16:41:02 crc kubenswrapper[4718]: E0123 16:41:02.310587 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eeffb69-5654-4e1d-ae21-580a5a235246" containerName="nova-manage" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.310596 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eeffb69-5654-4e1d-ae21-580a5a235246" containerName="nova-manage" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.311003 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad97822-bebe-4e44-97c5-92732ed20095" containerName="nova-scheduler-scheduler" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.311041 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eeffb69-5654-4e1d-ae21-580a5a235246" containerName="nova-manage" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.312183 4718 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.314361 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.354827 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.404333 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5j9\" (UniqueName: \"kubernetes.io/projected/f249685b-e052-4a6c-b34e-28fa3fe0610a-kube-api-access-lx5j9\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.404555 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-config-data\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.404932 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.507398 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-config-data\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.507542 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.507644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx5j9\" (UniqueName: \"kubernetes.io/projected/f249685b-e052-4a6c-b34e-28fa3fe0610a-kube-api-access-lx5j9\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.513402 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-config-data\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.520810 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f249685b-e052-4a6c-b34e-28fa3fe0610a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.527513 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx5j9\" (UniqueName: \"kubernetes.io/projected/f249685b-e052-4a6c-b34e-28fa3fe0610a-kube-api-access-lx5j9\") pod \"nova-scheduler-0\" (UID: \"f249685b-e052-4a6c-b34e-28fa3fe0610a\") " pod="openstack/nova-scheduler-0" Jan 23 16:41:02 crc kubenswrapper[4718]: I0123 16:41:02.640780 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 16:41:03 crc kubenswrapper[4718]: I0123 16:41:03.157709 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad97822-bebe-4e44-97c5-92732ed20095" path="/var/lib/kubelet/pods/8ad97822-bebe-4e44-97c5-92732ed20095/volumes" Jan 23 16:41:03 crc kubenswrapper[4718]: I0123 16:41:03.197798 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 16:41:03 crc kubenswrapper[4718]: I0123 16:41:03.269046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f249685b-e052-4a6c-b34e-28fa3fe0610a","Type":"ContainerStarted","Data":"8e3e939e6b98623e944eb3b2088ec4f9c7ac6b30f50486c405986cf79d644a95"} Jan 23 16:41:03 crc kubenswrapper[4718]: I0123 16:41:03.595253 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": read tcp 10.217.0.2:37056->10.217.1.6:8775: read: connection reset by peer" Jan 23 16:41:03 crc kubenswrapper[4718]: I0123 16:41:03.595253 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": read tcp 10.217.0.2:37040->10.217.1.6:8775: read: connection reset by peer" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.236012 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.320298 4718 generic.go:334] "Generic (PLEG): container finished" podID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerID="e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3" exitCode=0 Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.320392 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerDied","Data":"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3"} Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.320420 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bd9c89a-130a-408e-b86f-5a82e75d3ae2","Type":"ContainerDied","Data":"9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3"} Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.320437 4718 scope.go:117] "RemoveContainer" containerID="e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.320610 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.332970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f249685b-e052-4a6c-b34e-28fa3fe0610a","Type":"ContainerStarted","Data":"e6978b27b7d0aab3a0b4054aa10b11b2f7ccce3820f6db436467f24a9b210f4d"} Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.358554 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data\") pod \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.358639 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs\") pod \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.358751 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs\") pod \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.359482 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs" (OuterVolumeSpecName: "logs") pod "7bd9c89a-130a-408e-b86f-5a82e75d3ae2" (UID: "7bd9c89a-130a-408e-b86f-5a82e75d3ae2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.359798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle\") pod \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.359864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpks9\" (UniqueName: \"kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9\") pod \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\" (UID: \"7bd9c89a-130a-408e-b86f-5a82e75d3ae2\") " Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.360468 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.364841 4718 scope.go:117] "RemoveContainer" containerID="5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.371447 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.371419775 podStartE2EDuration="2.371419775s" podCreationTimestamp="2026-01-23 16:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:41:04.365082663 +0000 UTC m=+1465.512324654" watchObservedRunningTime="2026-01-23 16:41:04.371419775 +0000 UTC m=+1465.518661766" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.377163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9" 
(OuterVolumeSpecName: "kube-api-access-tpks9") pod "7bd9c89a-130a-408e-b86f-5a82e75d3ae2" (UID: "7bd9c89a-130a-408e-b86f-5a82e75d3ae2"). InnerVolumeSpecName "kube-api-access-tpks9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.409521 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd9c89a-130a-408e-b86f-5a82e75d3ae2" (UID: "7bd9c89a-130a-408e-b86f-5a82e75d3ae2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.410956 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data" (OuterVolumeSpecName: "config-data") pod "7bd9c89a-130a-408e-b86f-5a82e75d3ae2" (UID: "7bd9c89a-130a-408e-b86f-5a82e75d3ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.433575 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7bd9c89a-130a-408e-b86f-5a82e75d3ae2" (UID: "7bd9c89a-130a-408e-b86f-5a82e75d3ae2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.462381 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.462776 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpks9\" (UniqueName: \"kubernetes.io/projected/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-kube-api-access-tpks9\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.462914 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.462933 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bd9c89a-130a-408e-b86f-5a82e75d3ae2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.523986 4718 scope.go:117] "RemoveContainer" containerID="e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3" Jan 23 16:41:04 crc kubenswrapper[4718]: E0123 16:41:04.525184 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3\": container with ID starting with e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3 not found: ID does not exist" containerID="e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.525211 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3"} err="failed to get container status \"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3\": rpc error: code = NotFound desc = could not find container \"e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3\": container with ID starting with e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3 not found: ID does not exist" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.525235 4718 scope.go:117] "RemoveContainer" containerID="5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84" Jan 23 16:41:04 crc kubenswrapper[4718]: E0123 16:41:04.525588 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84\": container with ID starting with 5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84 not found: ID does not exist" containerID="5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.525606 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84"} err="failed to get container status \"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84\": rpc error: code = NotFound desc = could not find container \"5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84\": container with ID starting with 5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84 not found: ID does not exist" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.675897 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.694793 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.709905 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:41:04 crc kubenswrapper[4718]: E0123 16:41:04.710534 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.710558 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" Jan 23 16:41:04 crc kubenswrapper[4718]: E0123 16:41:04.710627 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.710652 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.710954 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-log" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.711001 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" containerName="nova-metadata-metadata" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.712768 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.715908 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.715909 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.749709 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.870532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-config-data\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.870793 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kzq4\" (UniqueName: \"kubernetes.io/projected/82c5d1a7-2493-4399-9a20-247f71a1c754-kube-api-access-6kzq4\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.870949 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.871041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c5d1a7-2493-4399-9a20-247f71a1c754-logs\") pod \"nova-metadata-0\" 
(UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.871271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.972700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.972794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-config-data\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.972819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kzq4\" (UniqueName: \"kubernetes.io/projected/82c5d1a7-2493-4399-9a20-247f71a1c754-kube-api-access-6kzq4\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.972914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 
16:41:04.972945 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c5d1a7-2493-4399-9a20-247f71a1c754-logs\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.973398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c5d1a7-2493-4399-9a20-247f71a1c754-logs\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.978319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-config-data\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.983673 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.988340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c5d1a7-2493-4399-9a20-247f71a1c754-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:04 crc kubenswrapper[4718]: I0123 16:41:04.989393 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kzq4\" (UniqueName: \"kubernetes.io/projected/82c5d1a7-2493-4399-9a20-247f71a1c754-kube-api-access-6kzq4\") pod \"nova-metadata-0\" (UID: 
\"82c5d1a7-2493-4399-9a20-247f71a1c754\") " pod="openstack/nova-metadata-0" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.052293 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.160325 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd9c89a-130a-408e-b86f-5a82e75d3ae2" path="/var/lib/kubelet/pods/7bd9c89a-130a-408e-b86f-5a82e75d3ae2/volumes" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.359598 4718 generic.go:334] "Generic (PLEG): container finished" podID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerID="b0d478e010846ab5a51efb0e69eca260cc0db9f38af91d508549bcc5efc957cc" exitCode=0 Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.359671 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerDied","Data":"b0d478e010846ab5a51efb0e69eca260cc0db9f38af91d508549bcc5efc957cc"} Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.360128 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0","Type":"ContainerDied","Data":"7ebba074891d3c2bc942588681ac7a81b9a42456dd3be5c1495ab3cf058a02cd"} Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.360147 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebba074891d3c2bc942588681ac7a81b9a42456dd3be5c1495ab3cf058a02cd" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.367467 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.491622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.491708 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bfs9\" (UniqueName: \"kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.491792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.491941 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.491966 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.492115 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle\") pod \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\" (UID: \"a10752e9-c5c9-4a45-b3b1-7f4d51400ae0\") " Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.493577 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs" (OuterVolumeSpecName: "logs") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.497980 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9" (OuterVolumeSpecName: "kube-api-access-2bfs9") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "kube-api-access-2bfs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.526270 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data" (OuterVolumeSpecName: "config-data") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.526403 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.558271 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.566964 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" (UID: "a10752e9-c5c9-4a45-b3b1-7f4d51400ae0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595148 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595467 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595530 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595593 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595700 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.595753 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bfs9\" (UniqueName: \"kubernetes.io/projected/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0-kube-api-access-2bfs9\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:05 crc kubenswrapper[4718]: I0123 16:41:05.597275 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.389070 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.390918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c5d1a7-2493-4399-9a20-247f71a1c754","Type":"ContainerStarted","Data":"6242cd945247f97d97291bb8f7da6082b2fcfb843c508b11a32f71a1a2e3e094"} Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.390968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c5d1a7-2493-4399-9a20-247f71a1c754","Type":"ContainerStarted","Data":"34aa01e07f7586314d216d1f82d538d04eba2bc3fb22ee87983d8ca82112ac78"} Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.390981 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c5d1a7-2493-4399-9a20-247f71a1c754","Type":"ContainerStarted","Data":"bcf79742b565164472492aab9d7a9f49cda76e2d83fe38ecfa2ef908382850f3"} Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.410802 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.410787171 
podStartE2EDuration="2.410787171s" podCreationTimestamp="2026-01-23 16:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:41:06.408878919 +0000 UTC m=+1467.556120910" watchObservedRunningTime="2026-01-23 16:41:06.410787171 +0000 UTC m=+1467.558029162" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.458863 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.483738 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.494968 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 16:41:06 crc kubenswrapper[4718]: E0123 16:41:06.495461 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-api" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.495477 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-api" Jan 23 16:41:06 crc kubenswrapper[4718]: E0123 16:41:06.495500 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-log" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.495508 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-log" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.495771 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-log" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.495797 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" containerName="nova-api-api" Jan 23 16:41:06 crc 
kubenswrapper[4718]: I0123 16:41:06.497096 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.502125 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.502363 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.502993 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.518184 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.624075 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-config-data\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.624259 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-public-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.624908 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353b7e73-13e9-4989-8f55-5dedebe8e92a-logs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.625002 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.625194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.625341 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r2j\" (UniqueName: \"kubernetes.io/projected/353b7e73-13e9-4989-8f55-5dedebe8e92a-kube-api-access-x2r2j\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.728728 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-config-data\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.728817 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-public-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.728953 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353b7e73-13e9-4989-8f55-5dedebe8e92a-logs\") pod \"nova-api-0\" 
(UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.728983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.729039 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.729092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2r2j\" (UniqueName: \"kubernetes.io/projected/353b7e73-13e9-4989-8f55-5dedebe8e92a-kube-api-access-x2r2j\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.729446 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353b7e73-13e9-4989-8f55-5dedebe8e92a-logs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.735722 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-public-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.736609 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.738389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.739219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353b7e73-13e9-4989-8f55-5dedebe8e92a-config-data\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.764043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2r2j\" (UniqueName: \"kubernetes.io/projected/353b7e73-13e9-4989-8f55-5dedebe8e92a-kube-api-access-x2r2j\") pod \"nova-api-0\" (UID: \"353b7e73-13e9-4989-8f55-5dedebe8e92a\") " pod="openstack/nova-api-0" Jan 23 16:41:06 crc kubenswrapper[4718]: I0123 16:41:06.823573 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 16:41:07 crc kubenswrapper[4718]: I0123 16:41:07.160866 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10752e9-c5c9-4a45-b3b1-7f4d51400ae0" path="/var/lib/kubelet/pods/a10752e9-c5c9-4a45-b3b1-7f4d51400ae0/volumes" Jan 23 16:41:07 crc kubenswrapper[4718]: I0123 16:41:07.374070 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 16:41:07 crc kubenswrapper[4718]: I0123 16:41:07.421936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353b7e73-13e9-4989-8f55-5dedebe8e92a","Type":"ContainerStarted","Data":"29e6b64c1fb458a8779f91ca00254d2ce3c9c3b8fbd67c447e86309b14a0ca49"} Jan 23 16:41:07 crc kubenswrapper[4718]: I0123 16:41:07.640897 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.726486 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-conmon-06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-conmon-06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750.scope: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.726584 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-06d21284d53f67a8975cbd93c86ca1862ea2dfc571dc809fb31fc12671283750.scope: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.726611 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-conmon-1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-conmon-1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f.scope: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.726647 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-1c81664e7d2a0be63987229b6547db7ae27eb68ead50a930d1d111e85807846f.scope: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.731712 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-f9f6b300b6acb9d938bcd8e217089232a68ac9134f2a144234decb8d38cb70ae WatchSource:0}: Error finding container f9f6b300b6acb9d938bcd8e217089232a68ac9134f2a144234decb8d38cb70ae: Status 404 returned error can't find the container with id f9f6b300b6acb9d938bcd8e217089232a68ac9134f2a144234decb8d38cb70ae Jan 23 16:41:07 crc 
kubenswrapper[4718]: W0123 16:41:07.734405 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10752e9_c5c9_4a45_b3b1_7f4d51400ae0.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10752e9_c5c9_4a45_b3b1_7f4d51400ae0.slice: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.734505 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eeffb69_5654_4e1d_ae21_580a5a235246.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eeffb69_5654_4e1d_ae21_580a5a235246.slice: no such file or directory Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.740949 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159.scope WatchSource:0}: Error finding container 5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159: Status 404 returned error can't find the container with id 5c51516eefe825ee4abe8e5f439fbeb62479dcf0886883fed90323f6d7eb0159 Jan 23 16:41:07 crc kubenswrapper[4718]: W0123 16:41:07.747887 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fa59e_66c4_44b0_941a_3294b39afee9.slice/crio-bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb.scope WatchSource:0}: Error finding container bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb: Status 404 returned error can't find the container with id bf3ef4f65fa05c64fc07f59712ae444cd96c5a08b76af09a9e1af4aae4aa9acb Jan 23 16:41:07 crc kubenswrapper[4718]: 
E0123 16:41:07.872260 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice/crio-82411c3a7a39b0d26d2cce33c444dbcac0e38dcca6fb44fe943457e3cbf5677c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-conmon-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-conmon-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:41:07 crc kubenswrapper[4718]: E0123 16:41:07.872351 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice/crio-82411c3a7a39b0d26d2cce33c444dbcac0e38dcca6fb44fe943457e3cbf5677c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-conmon-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-conmon-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice\": RecentStats: unable to find data in memory cache]" Jan 23 16:41:07 crc kubenswrapper[4718]: E0123 16:41:07.873902 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-conmon-c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-conmon-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-conmon-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:41:07 crc kubenswrapper[4718]: E0123 16:41:07.873911 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-0c11dcff7bd9af1037f756d93604a20e1a78c62964dedd67cc63e328b00e5b18\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-conmon-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice/crio-conmon-c99c6b045e6e7b2a371ac86b6189d273b2bba994f442187bc7513637f9a65a14.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942e0bee_95f9_4e8d_8157_8353e9b80b41.slice/crio-conmon-2f6f27753e47c23eb8d29fccc51d61064100eb7a9fbbf9f0928c0f6c98422cba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice/crio-82411c3a7a39b0d26d2cce33c444dbcac0e38dcca6fb44fe943457e3cbf5677c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-4fb0653160b8527fa47726e3735bc6114e1d90e010736a4d73ee24ca8f407e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-401dd062ad1f06df974cd2b39d30799b1421c2ab5df4481e24ef1805e7b60031\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-5e8711ca6a355fea79cb61008ad0885505f5dcf1df2be592ffcedbd04f0feb84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1043069_738a_4c98_ba8b_6b5b5bbd0856.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-conmon-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37157014_8ad6_4d39_b8ad_376689a6340b.slice/crio-d9615a828ae9ce07e37d37a640e8aed5f80997ceb8e32c99a88df408842afca6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-conmon-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice\": RecentStats: unable to find data in memory cache]" Jan 23 16:41:07 crc kubenswrapper[4718]: E0123 16:41:07.875563 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-9a0e2ed9511c4f3075899067961236d406d0d28d8c737c671bb03ae073f785c3\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad97822_bebe_4e44_97c5_92732ed20095.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd9c89a_130a_408e_b86f_5a82e75d3ae2.slice/crio-e3284ed9803aab5608bafcfab0328cc8cc1be705d4532994523d8836647c0dd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dfb1b9_ceec_4c77_aa2a_b5dffa296e39.slice/crio-conmon-087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.443874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353b7e73-13e9-4989-8f55-5dedebe8e92a","Type":"ContainerStarted","Data":"b903fd80250c4692f5bc9650e5dd5a2b5888a15b2e274e6e1e903a452b4e7312"} Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.444251 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353b7e73-13e9-4989-8f55-5dedebe8e92a","Type":"ContainerStarted","Data":"5518d45ce569d63e2bd723ca83c102200815a4f15fa3059dc5cca67101781a02"} Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.453044 4718 generic.go:334] "Generic (PLEG): container finished" podID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerID="087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83" exitCode=137 Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.453086 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerDied","Data":"087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83"} Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.478836 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.478814532 podStartE2EDuration="2.478814532s" podCreationTimestamp="2026-01-23 16:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:41:08.471417682 +0000 UTC m=+1469.618659683" watchObservedRunningTime="2026-01-23 16:41:08.478814532 +0000 UTC m=+1469.626056523" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.783147 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.791965 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.890558 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.907839 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle\") pod \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.907952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data\") pod \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 
16:41:08.908008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8shrk\" (UniqueName: \"kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk\") pod \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.908061 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts\") pod \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\" (UID: \"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39\") " Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.921069 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk" (OuterVolumeSpecName: "kube-api-access-8shrk") pod "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" (UID: "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39"). InnerVolumeSpecName "kube-api-access-8shrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:08 crc kubenswrapper[4718]: I0123 16:41:08.936462 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts" (OuterVolumeSpecName: "scripts") pod "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" (UID: "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.012520 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8shrk\" (UniqueName: \"kubernetes.io/projected/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-kube-api-access-8shrk\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.012552 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.041436 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data" (OuterVolumeSpecName: "config-data") pod "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" (UID: "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.073438 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"] Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.096204 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" (UID: "55dfb1b9-ceec-4c77-aa2a-b5dffa296e39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.114944 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.114979 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.474979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"55dfb1b9-ceec-4c77-aa2a-b5dffa296e39","Type":"ContainerDied","Data":"d5d67788daaf5ea7a6959102cce503ccc8729db1ecf5cb0fccbf7edf21d220e9"} Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.475015 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.475052 4718 scope.go:117] "RemoveContainer" containerID="087dd79f11afbe138aff708ced1f692ecfd669c3a1881fc98b2ca82ee4470e83" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.522111 4718 scope.go:117] "RemoveContainer" containerID="fc3e09a5d0cd0b1e0638d2be800cdf3bf51f6217fe0b23aedafdad338a2103a4" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.528943 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.565755 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.585302 4718 scope.go:117] "RemoveContainer" containerID="3877a95e9a8ddb4fda5030cfe94bf4526e156896babc9c4092e4b48f42623858" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.594156 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-0"] Jan 23 16:41:09 crc kubenswrapper[4718]: E0123 16:41:09.595742 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-notifier" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.595772 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-notifier" Jan 23 16:41:09 crc kubenswrapper[4718]: E0123 16:41:09.595826 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-api" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.595837 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-api" Jan 23 16:41:09 crc kubenswrapper[4718]: E0123 16:41:09.595857 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-listener" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.595865 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-listener" Jan 23 16:41:09 crc kubenswrapper[4718]: E0123 16:41:09.595884 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-evaluator" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.595892 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-evaluator" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.598236 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-evaluator" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.598273 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-api" Jan 23 
16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.598306 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-notifier" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.598326 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" containerName="aodh-listener" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.602488 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.605615 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.606077 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.606253 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.606503 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tjnkn" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.606719 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.614898 4718 scope.go:117] "RemoveContainer" containerID="eb53aff40f16b82c9d447fcdc41f3d24a0f1a9be06e5cb15d9768819afb6426c" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.634377 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.732589 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data\") pod \"aodh-0\" (UID: 
\"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.733316 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.733363 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.733480 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.733517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvht4\" (UniqueName: \"kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.733600 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.835861 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.835934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.835964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.837085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.837119 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvht4\" (UniqueName: \"kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.837171 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc 
kubenswrapper[4718]: I0123 16:41:09.842376 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.842424 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.844132 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.844526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.846718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.862427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvht4\" (UniqueName: \"kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4\") pod \"aodh-0\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " 
pod="openstack/aodh-0" Jan 23 16:41:09 crc kubenswrapper[4718]: I0123 16:41:09.925079 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:41:10 crc kubenswrapper[4718]: I0123 16:41:10.053876 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 16:41:10 crc kubenswrapper[4718]: I0123 16:41:10.054279 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 16:41:10 crc kubenswrapper[4718]: W0123 16:41:10.443736 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b4f02b_601b_4c1d_a9df_a488ce538760.slice/crio-1ab3704f92887e05a58fd0d3007ea0f2d74d0757d977ec0f1a0e37afbf4a7040 WatchSource:0}: Error finding container 1ab3704f92887e05a58fd0d3007ea0f2d74d0757d977ec0f1a0e37afbf4a7040: Status 404 returned error can't find the container with id 1ab3704f92887e05a58fd0d3007ea0f2d74d0757d977ec0f1a0e37afbf4a7040 Jan 23 16:41:10 crc kubenswrapper[4718]: I0123 16:41:10.451545 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:41:10 crc kubenswrapper[4718]: I0123 16:41:10.505918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerStarted","Data":"1ab3704f92887e05a58fd0d3007ea0f2d74d0757d977ec0f1a0e37afbf4a7040"} Jan 23 16:41:10 crc kubenswrapper[4718]: I0123 16:41:10.507779 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4jgb" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="registry-server" containerID="cri-o://a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353" gracePeriod=2 Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.153123 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="55dfb1b9-ceec-4c77-aa2a-b5dffa296e39" path="/var/lib/kubelet/pods/55dfb1b9-ceec-4c77-aa2a-b5dffa296e39/volumes" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.169947 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.285233 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf2z2\" (UniqueName: \"kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2\") pod \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.285413 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content\") pod \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.285715 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities\") pod \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\" (UID: \"6b750c35-3d5d-48e8-a08f-4bc97b63ee81\") " Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.286248 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities" (OuterVolumeSpecName: "utilities") pod "6b750c35-3d5d-48e8-a08f-4bc97b63ee81" (UID: "6b750c35-3d5d-48e8-a08f-4bc97b63ee81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.286765 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.290809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2" (OuterVolumeSpecName: "kube-api-access-wf2z2") pod "6b750c35-3d5d-48e8-a08f-4bc97b63ee81" (UID: "6b750c35-3d5d-48e8-a08f-4bc97b63ee81"). InnerVolumeSpecName "kube-api-access-wf2z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.389535 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf2z2\" (UniqueName: \"kubernetes.io/projected/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-kube-api-access-wf2z2\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.418907 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b750c35-3d5d-48e8-a08f-4bc97b63ee81" (UID: "6b750c35-3d5d-48e8-a08f-4bc97b63ee81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.492228 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b750c35-3d5d-48e8-a08f-4bc97b63ee81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.522339 4718 generic.go:334] "Generic (PLEG): container finished" podID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerID="a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353" exitCode=0 Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.522400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerDied","Data":"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353"} Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.522452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jgb" event={"ID":"6b750c35-3d5d-48e8-a08f-4bc97b63ee81","Type":"ContainerDied","Data":"9547e162a25bbfae0293479d39a49cc415f56607677815cd9c2db7ce73637c2f"} Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.522469 4718 scope.go:117] "RemoveContainer" containerID="a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.524210 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jgb" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.525358 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerStarted","Data":"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d"} Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.562369 4718 scope.go:117] "RemoveContainer" containerID="28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.570823 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"] Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.583442 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4jgb"] Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.599315 4718 scope.go:117] "RemoveContainer" containerID="e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.630083 4718 scope.go:117] "RemoveContainer" containerID="a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353" Jan 23 16:41:11 crc kubenswrapper[4718]: E0123 16:41:11.630401 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353\": container with ID starting with a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353 not found: ID does not exist" containerID="a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.630445 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353"} err="failed to get container status 
\"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353\": rpc error: code = NotFound desc = could not find container \"a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353\": container with ID starting with a4252443fb75e14b0f1f427c57c3548a12bdae61d9b7f723b9a93eace7928353 not found: ID does not exist" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.630474 4718 scope.go:117] "RemoveContainer" containerID="28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a" Jan 23 16:41:11 crc kubenswrapper[4718]: E0123 16:41:11.631132 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a\": container with ID starting with 28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a not found: ID does not exist" containerID="28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.631164 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a"} err="failed to get container status \"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a\": rpc error: code = NotFound desc = could not find container \"28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a\": container with ID starting with 28e3102e96017487359179c192e0f601be72b31a332fbc3e311f3189f4f8f32a not found: ID does not exist" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.631195 4718 scope.go:117] "RemoveContainer" containerID="e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe" Jan 23 16:41:11 crc kubenswrapper[4718]: E0123 16:41:11.631520 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe\": container with ID starting with e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe not found: ID does not exist" containerID="e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe" Jan 23 16:41:11 crc kubenswrapper[4718]: I0123 16:41:11.631563 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe"} err="failed to get container status \"e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe\": rpc error: code = NotFound desc = could not find container \"e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe\": container with ID starting with e17124e24c3ffc2fdb8aaaa5e29a8d8f8f8934e1f5b94adfe91dc2d70eb7fdbe not found: ID does not exist" Jan 23 16:41:12 crc kubenswrapper[4718]: I0123 16:41:12.539308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerStarted","Data":"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de"} Jan 23 16:41:12 crc kubenswrapper[4718]: I0123 16:41:12.641733 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 16:41:12 crc kubenswrapper[4718]: I0123 16:41:12.697947 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 16:41:13 crc kubenswrapper[4718]: I0123 16:41:13.156033 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" path="/var/lib/kubelet/pods/6b750c35-3d5d-48e8-a08f-4bc97b63ee81/volumes" Jan 23 16:41:13 crc kubenswrapper[4718]: I0123 16:41:13.570187 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerStarted","Data":"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac"} Jan 23 16:41:13 crc kubenswrapper[4718]: I0123 16:41:13.605615 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 16:41:14 crc kubenswrapper[4718]: I0123 16:41:14.583050 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerStarted","Data":"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20"} Jan 23 16:41:14 crc kubenswrapper[4718]: I0123 16:41:14.614324 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.547083826 podStartE2EDuration="5.61430136s" podCreationTimestamp="2026-01-23 16:41:09 +0000 UTC" firstStartedPulling="2026-01-23 16:41:10.447533149 +0000 UTC m=+1471.594775130" lastFinishedPulling="2026-01-23 16:41:13.514750673 +0000 UTC m=+1474.661992664" observedRunningTime="2026-01-23 16:41:14.604253288 +0000 UTC m=+1475.751495299" watchObservedRunningTime="2026-01-23 16:41:14.61430136 +0000 UTC m=+1475.761543351" Jan 23 16:41:15 crc kubenswrapper[4718]: I0123 16:41:15.053549 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 16:41:15 crc kubenswrapper[4718]: I0123 16:41:15.055295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 16:41:16 crc kubenswrapper[4718]: I0123 16:41:16.064790 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82c5d1a7-2493-4399-9a20-247f71a1c754" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:41:16 crc kubenswrapper[4718]: I0123 16:41:16.064836 
4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82c5d1a7-2493-4399-9a20-247f71a1c754" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:41:16 crc kubenswrapper[4718]: I0123 16:41:16.823820 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:41:16 crc kubenswrapper[4718]: I0123 16:41:16.825830 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 16:41:17 crc kubenswrapper[4718]: I0123 16:41:17.837663 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="353b7e73-13e9-4989-8f55-5dedebe8e92a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.16:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:41:17 crc kubenswrapper[4718]: I0123 16:41:17.837836 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="353b7e73-13e9-4989-8f55-5dedebe8e92a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.16:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:41:22 crc kubenswrapper[4718]: I0123 16:41:22.555912 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 16:41:25 crc kubenswrapper[4718]: I0123 16:41:25.063344 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 16:41:25 crc kubenswrapper[4718]: I0123 16:41:25.069051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 16:41:25 crc kubenswrapper[4718]: I0123 16:41:25.072742 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Jan 23 16:41:25 crc kubenswrapper[4718]: I0123 16:41:25.759110 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.830276 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.830899 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.834253 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.840090 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.940802 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:26 crc kubenswrapper[4718]: E0123 16:41:26.941578 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="extract-utilities" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.941776 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="extract-utilities" Jan 23 16:41:26 crc kubenswrapper[4718]: E0123 16:41:26.941860 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="extract-content" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.941917 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="extract-content" Jan 23 16:41:26 crc kubenswrapper[4718]: E0123 16:41:26.942033 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="registry-server" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.942117 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="registry-server" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.942443 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b750c35-3d5d-48e8-a08f-4bc97b63ee81" containerName="registry-server" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.945456 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:26 crc kubenswrapper[4718]: I0123 16:41:26.991739 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.081978 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.082535 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.082587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvn6\" (UniqueName: \"kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6\") pod \"certified-operators-9fshn\" (UID: 
\"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.184860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.184933 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.184984 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lvn6\" (UniqueName: \"kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.185495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.185501 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") 
" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.206316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lvn6\" (UniqueName: \"kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6\") pod \"certified-operators-9fshn\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.280934 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.775783 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.789963 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 16:41:27 crc kubenswrapper[4718]: W0123 16:41:27.833532 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b35ff0_0e22_4d60_a4dc_9b724ea67064.slice/crio-01dd39ac01bb1b80c52f3092828e609bc43a2c778cb0cc6a52fd491e80b0ea85 WatchSource:0}: Error finding container 01dd39ac01bb1b80c52f3092828e609bc43a2c778cb0cc6a52fd491e80b0ea85: Status 404 returned error can't find the container with id 01dd39ac01bb1b80c52f3092828e609bc43a2c778cb0cc6a52fd491e80b0ea85 Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.847753 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.889615 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.890204 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="ee115b8e-40cf-4641-acd3-13132054a9b7" containerName="kube-state-metrics" containerID="cri-o://4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee" gracePeriod=30 Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.972452 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:27 crc kubenswrapper[4718]: I0123 16:41:27.973133 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" containerName="mysqld-exporter" containerID="cri-o://15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a" gracePeriod=30 Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.584089 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.633741 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m87f4\" (UniqueName: \"kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4\") pod \"ee115b8e-40cf-4641-acd3-13132054a9b7\" (UID: \"ee115b8e-40cf-4641-acd3-13132054a9b7\") " Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.642786 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4" (OuterVolumeSpecName: "kube-api-access-m87f4") pod "ee115b8e-40cf-4641-acd3-13132054a9b7" (UID: "ee115b8e-40cf-4641-acd3-13132054a9b7"). InnerVolumeSpecName "kube-api-access-m87f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.739589 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m87f4\" (UniqueName: \"kubernetes.io/projected/ee115b8e-40cf-4641-acd3-13132054a9b7-kube-api-access-m87f4\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.776532 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.808890 4718 generic.go:334] "Generic (PLEG): container finished" podID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" containerID="15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a" exitCode=2 Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.808998 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"533ff73f-ffa0-41ae-a58a-9ef5491270e6","Type":"ContainerDied","Data":"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.809030 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"533ff73f-ffa0-41ae-a58a-9ef5491270e6","Type":"ContainerDied","Data":"393376dd6e2d9aa58a79f00d3e5c91651d6950742d8dd7bf85ef1035d869e3b8"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.809050 4718 scope.go:117] "RemoveContainer" containerID="15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.809641 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.813546 4718 generic.go:334] "Generic (PLEG): container finished" podID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerID="e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836" exitCode=0 Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.813600 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerDied","Data":"e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.813760 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerStarted","Data":"01dd39ac01bb1b80c52f3092828e609bc43a2c778cb0cc6a52fd491e80b0ea85"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.816210 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.836331 4718 generic.go:334] "Generic (PLEG): container finished" podID="ee115b8e-40cf-4641-acd3-13132054a9b7" containerID="4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee" exitCode=2 Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.837408 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.839028 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee115b8e-40cf-4641-acd3-13132054a9b7","Type":"ContainerDied","Data":"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.839074 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee115b8e-40cf-4641-acd3-13132054a9b7","Type":"ContainerDied","Data":"d50f847a2901d4df7835dc73d0cf3443854e05e4ecf4cfe32a2b8f9a40619fd4"} Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.840880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data\") pod \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.841121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle\") pod \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.841343 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p59cf\" (UniqueName: \"kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf\") pod \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\" (UID: \"533ff73f-ffa0-41ae-a58a-9ef5491270e6\") " Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.857595 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf" (OuterVolumeSpecName: 
"kube-api-access-p59cf") pod "533ff73f-ffa0-41ae-a58a-9ef5491270e6" (UID: "533ff73f-ffa0-41ae-a58a-9ef5491270e6"). InnerVolumeSpecName "kube-api-access-p59cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.859446 4718 scope.go:117] "RemoveContainer" containerID="15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a" Jan 23 16:41:28 crc kubenswrapper[4718]: E0123 16:41:28.860379 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a\": container with ID starting with 15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a not found: ID does not exist" containerID="15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.860491 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a"} err="failed to get container status \"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a\": rpc error: code = NotFound desc = could not find container \"15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a\": container with ID starting with 15a4e11d21cd795ac7bbc3f7e3bb3d13d7f093434abde5bf189df2d14f85589a not found: ID does not exist" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.860563 4718 scope.go:117] "RemoveContainer" containerID="4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.875742 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 
16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.875795 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.875836 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.878322 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.878409 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a" gracePeriod=600 Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.900868 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.912027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533ff73f-ffa0-41ae-a58a-9ef5491270e6" (UID: "533ff73f-ffa0-41ae-a58a-9ef5491270e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.926817 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.932876 4718 scope.go:117] "RemoveContainer" containerID="4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee" Jan 23 16:41:28 crc kubenswrapper[4718]: E0123 16:41:28.933691 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee\": container with ID starting with 4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee not found: ID does not exist" containerID="4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.933718 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee"} err="failed to get container status \"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee\": rpc error: code = NotFound desc = could not find container \"4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee\": container with ID starting with 4a0ed19db5ccddc950959c3ff7b9edcafdf4b4f5c8a6e0fd82d0ab7abffb1eee not found: ID does not exist" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.947542 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:28 crc kubenswrapper[4718]: E0123 16:41:28.948163 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" containerName="mysqld-exporter" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.948185 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" containerName="mysqld-exporter" Jan 
23 16:41:28 crc kubenswrapper[4718]: E0123 16:41:28.948242 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee115b8e-40cf-4641-acd3-13132054a9b7" containerName="kube-state-metrics" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.948249 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee115b8e-40cf-4641-acd3-13132054a9b7" containerName="kube-state-metrics" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.948476 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee115b8e-40cf-4641-acd3-13132054a9b7" containerName="kube-state-metrics" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.948503 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" containerName="mysqld-exporter" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.949382 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.951613 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p59cf\" (UniqueName: \"kubernetes.io/projected/533ff73f-ffa0-41ae-a58a-9ef5491270e6-kube-api-access-p59cf\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.951659 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.952578 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.953572 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.966004 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:28 crc kubenswrapper[4718]: I0123 16:41:28.990729 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data" (OuterVolumeSpecName: "config-data") pod "533ff73f-ffa0-41ae-a58a-9ef5491270e6" (UID: "533ff73f-ffa0-41ae-a58a-9ef5491270e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.055462 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.055543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jkj\" (UniqueName: \"kubernetes.io/projected/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-api-access-26jkj\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.055622 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.055889 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.056067 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533ff73f-ffa0-41ae-a58a-9ef5491270e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.178856 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.179080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jkj\" (UniqueName: \"kubernetes.io/projected/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-api-access-26jkj\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.179908 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.180077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " 
pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.205943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.217839 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.234054 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.234835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jkj\" (UniqueName: \"kubernetes.io/projected/8ee3d2a0-3f10-40d9-980c-deb1bc35b613-kube-api-access-26jkj\") pod \"kube-state-metrics-0\" (UID: \"8ee3d2a0-3f10-40d9-980c-deb1bc35b613\") " pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.265839 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee115b8e-40cf-4641-acd3-13132054a9b7" path="/var/lib/kubelet/pods/ee115b8e-40cf-4641-acd3-13132054a9b7/volumes" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.266654 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:29 crc 
kubenswrapper[4718]: I0123 16:41:29.267012 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.281893 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.289081 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.293380 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.299128 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.299340 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.304305 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.393856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.393932 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqqt\" (UniqueName: \"kubernetes.io/projected/fa116646-6ee2-42f2-8a0f-56459516d495-kube-api-access-jmqqt\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.394214 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.394463 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.503211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.503718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqqt\" (UniqueName: \"kubernetes.io/projected/fa116646-6ee2-42f2-8a0f-56459516d495-kube-api-access-jmqqt\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.503787 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.503865 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.512656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.520210 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.520480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa116646-6ee2-42f2-8a0f-56459516d495-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.542034 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqqt\" (UniqueName: \"kubernetes.io/projected/fa116646-6ee2-42f2-8a0f-56459516d495-kube-api-access-jmqqt\") pod \"mysqld-exporter-0\" (UID: \"fa116646-6ee2-42f2-8a0f-56459516d495\") " pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.736502 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.891837 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.895524 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a" exitCode=0 Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.896672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a"} Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.896701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec"} Jan 23 16:41:29 crc kubenswrapper[4718]: I0123 16:41:29.896719 4718 scope.go:117] "RemoveContainer" containerID="90a1176f010d8fdadb1a7f6d6d0caefb9ea6ac28d367938b6700683923e3d094" Jan 23 16:41:29 crc kubenswrapper[4718]: W0123 16:41:29.916617 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee3d2a0_3f10_40d9_980c_deb1bc35b613.slice/crio-1078c95d21ac6a1c569ac345d63f044262c75472153053622ac4e1a8fe14cca8 WatchSource:0}: Error finding container 1078c95d21ac6a1c569ac345d63f044262c75472153053622ac4e1a8fe14cca8: Status 404 returned error can't find the container with id 1078c95d21ac6a1c569ac345d63f044262c75472153053622ac4e1a8fe14cca8 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.281547 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/mysqld-exporter-0"] Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.765604 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.766612 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-central-agent" containerID="cri-o://ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01" gracePeriod=30 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.766733 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="proxy-httpd" containerID="cri-o://6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070" gracePeriod=30 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.766733 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-notification-agent" containerID="cri-o://3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18" gracePeriod=30 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.766753 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="sg-core" containerID="cri-o://157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34" gracePeriod=30 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.921281 4718 generic.go:334] "Generic (PLEG): container finished" podID="47e154a7-be17-400e-b268-384d47e31bf7" containerID="6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070" exitCode=0 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.921328 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerDied","Data":"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070"} Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.927898 4718 generic.go:334] "Generic (PLEG): container finished" podID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerID="fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775" exitCode=0 Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.927963 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerDied","Data":"fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775"} Jan 23 16:41:30 crc kubenswrapper[4718]: I0123 16:41:30.946384 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fa116646-6ee2-42f2-8a0f-56459516d495","Type":"ContainerStarted","Data":"769ab8bab84e3be6e171b7c89c1df50a635f812f0a56ac64f4936140e9cb3d89"} Jan 23 16:41:31 crc kubenswrapper[4718]: I0123 16:41:31.016855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8ee3d2a0-3f10-40d9-980c-deb1bc35b613","Type":"ContainerStarted","Data":"99e0cb05f446b11e673470f5dfda3eb2148da42ed0a2e84cec0e9b5a2b48b25d"} Jan 23 16:41:31 crc kubenswrapper[4718]: I0123 16:41:31.017255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8ee3d2a0-3f10-40d9-980c-deb1bc35b613","Type":"ContainerStarted","Data":"1078c95d21ac6a1c569ac345d63f044262c75472153053622ac4e1a8fe14cca8"} Jan 23 16:41:31 crc kubenswrapper[4718]: I0123 16:41:31.017296 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 16:41:31 crc kubenswrapper[4718]: I0123 16:41:31.061625 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=2.630918837 podStartE2EDuration="3.061603436s" podCreationTimestamp="2026-01-23 16:41:28 +0000 UTC" firstStartedPulling="2026-01-23 16:41:29.925668307 +0000 UTC m=+1491.072910288" lastFinishedPulling="2026-01-23 16:41:30.356352896 +0000 UTC m=+1491.503594887" observedRunningTime="2026-01-23 16:41:31.034523944 +0000 UTC m=+1492.181765935" watchObservedRunningTime="2026-01-23 16:41:31.061603436 +0000 UTC m=+1492.208845427" Jan 23 16:41:31 crc kubenswrapper[4718]: I0123 16:41:31.155684 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533ff73f-ffa0-41ae-a58a-9ef5491270e6" path="/var/lib/kubelet/pods/533ff73f-ffa0-41ae-a58a-9ef5491270e6/volumes" Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.089405 4718 generic.go:334] "Generic (PLEG): container finished" podID="47e154a7-be17-400e-b268-384d47e31bf7" containerID="157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34" exitCode=2 Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.090868 4718 generic.go:334] "Generic (PLEG): container finished" podID="47e154a7-be17-400e-b268-384d47e31bf7" containerID="ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01" exitCode=0 Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.091152 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerDied","Data":"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34"} Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.091188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerDied","Data":"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01"} Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.110318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" 
event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerStarted","Data":"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0"} Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.124418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fa116646-6ee2-42f2-8a0f-56459516d495","Type":"ContainerStarted","Data":"3d29e8c2cf619c3c09370facc247e62ae98fb42e888a2e5b28e9510f55168deb"} Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.165013 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9fshn" podStartSLOduration=3.5577962640000003 podStartE2EDuration="6.164993987s" podCreationTimestamp="2026-01-23 16:41:26 +0000 UTC" firstStartedPulling="2026-01-23 16:41:28.815983776 +0000 UTC m=+1489.963225767" lastFinishedPulling="2026-01-23 16:41:31.423181489 +0000 UTC m=+1492.570423490" observedRunningTime="2026-01-23 16:41:32.141575414 +0000 UTC m=+1493.288817415" watchObservedRunningTime="2026-01-23 16:41:32.164993987 +0000 UTC m=+1493.312235978" Jan 23 16:41:32 crc kubenswrapper[4718]: I0123 16:41:32.212526 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.6918332190000003 podStartE2EDuration="3.212510071s" podCreationTimestamp="2026-01-23 16:41:29 +0000 UTC" firstStartedPulling="2026-01-23 16:41:30.285273695 +0000 UTC m=+1491.432515686" lastFinishedPulling="2026-01-23 16:41:30.805950537 +0000 UTC m=+1491.953192538" observedRunningTime="2026-01-23 16:41:32.189969572 +0000 UTC m=+1493.337211553" watchObservedRunningTime="2026-01-23 16:41:32.212510071 +0000 UTC m=+1493.359752062" Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.903167 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.977358 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978310 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978359 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4c6x\" (UniqueName: \"kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978409 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.978443 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd\") pod \"47e154a7-be17-400e-b268-384d47e31bf7\" (UID: \"47e154a7-be17-400e-b268-384d47e31bf7\") " Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.979038 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.979252 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:34 crc kubenswrapper[4718]: I0123 16:41:34.979299 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.001211 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts" (OuterVolumeSpecName: "scripts") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.004056 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x" (OuterVolumeSpecName: "kube-api-access-p4c6x") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "kube-api-access-p4c6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.021092 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.081662 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.081697 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4c6x\" (UniqueName: \"kubernetes.io/projected/47e154a7-be17-400e-b268-384d47e31bf7-kube-api-access-p4c6x\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.081707 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.081718 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47e154a7-be17-400e-b268-384d47e31bf7-run-httpd\") on node 
\"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.094769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.133916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data" (OuterVolumeSpecName: "config-data") pod "47e154a7-be17-400e-b268-384d47e31bf7" (UID: "47e154a7-be17-400e-b268-384d47e31bf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.162993 4718 generic.go:334] "Generic (PLEG): container finished" podID="47e154a7-be17-400e-b268-384d47e31bf7" containerID="3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18" exitCode=0 Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.163051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerDied","Data":"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18"} Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.163063 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.163088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47e154a7-be17-400e-b268-384d47e31bf7","Type":"ContainerDied","Data":"928a5a33195525866a21ad20ef6032af1e920a04ad2c031092adc9044f5a2d8f"} Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.163112 4718 scope.go:117] "RemoveContainer" containerID="6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.183777 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.183818 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e154a7-be17-400e-b268-384d47e31bf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.225656 4718 scope.go:117] "RemoveContainer" containerID="157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.233496 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.256702 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.257129 4718 scope.go:117] "RemoveContainer" containerID="3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.273941 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.274743 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-notification-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.274768 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-notification-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.274781 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="proxy-httpd" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.274791 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="proxy-httpd" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.274837 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="sg-core" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.274848 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="sg-core" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.274864 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-central-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.274872 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-central-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.275170 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="proxy-httpd" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.275223 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="sg-core" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.275240 4718 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-central-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.275255 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e154a7-be17-400e-b268-384d47e31bf7" containerName="ceilometer-notification-agent" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.278800 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.281451 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.281548 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.281658 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.289124 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.294704 4718 scope.go:117] "RemoveContainer" containerID="ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.336938 4718 scope.go:117] "RemoveContainer" containerID="6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.338023 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070\": container with ID starting with 6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070 not found: ID does not exist" containerID="6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 
16:41:35.338078 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070"} err="failed to get container status \"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070\": rpc error: code = NotFound desc = could not find container \"6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070\": container with ID starting with 6410293a9c41f8f1392481e4fb5425342129881d0542fbd978d3b6f15b8b7070 not found: ID does not exist" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.338415 4718 scope.go:117] "RemoveContainer" containerID="157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.339956 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34\": container with ID starting with 157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34 not found: ID does not exist" containerID="157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.340137 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34"} err="failed to get container status \"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34\": rpc error: code = NotFound desc = could not find container \"157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34\": container with ID starting with 157a6f569f802ad83239aa415e01878de84fd9941e17170133523f88cc64ad34 not found: ID does not exist" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.340175 4718 scope.go:117] "RemoveContainer" containerID="3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18" Jan 23 16:41:35 crc 
kubenswrapper[4718]: E0123 16:41:35.342476 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18\": container with ID starting with 3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18 not found: ID does not exist" containerID="3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.342519 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18"} err="failed to get container status \"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18\": rpc error: code = NotFound desc = could not find container \"3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18\": container with ID starting with 3dd469f42748dfe81a915a49f871c71ccb21da5e26f11c1e82f8b68c565a0b18 not found: ID does not exist" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.342548 4718 scope.go:117] "RemoveContainer" containerID="ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01" Jan 23 16:41:35 crc kubenswrapper[4718]: E0123 16:41:35.343025 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01\": container with ID starting with ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01 not found: ID does not exist" containerID="ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.343091 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01"} err="failed to get container status 
\"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01\": rpc error: code = NotFound desc = could not find container \"ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01\": container with ID starting with ab686f5b82d2864b8328c21499fc44f758d76099f003392f77359c9ae187bb01 not found: ID does not exist" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.388675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.388917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389460 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl9s\" (UniqueName: \"kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389719 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.389760 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492162 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " 
pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztl9s\" (UniqueName: \"kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492279 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492371 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.492410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.493984 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.494030 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.499210 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.500525 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.500587 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.500948 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.502412 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.514729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztl9s\" (UniqueName: \"kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s\") pod \"ceilometer-0\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") " pod="openstack/ceilometer-0" Jan 23 16:41:35 crc kubenswrapper[4718]: I0123 16:41:35.610605 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 16:41:36 crc kubenswrapper[4718]: I0123 16:41:36.162195 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 16:41:36 crc kubenswrapper[4718]: I0123 16:41:36.177522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerStarted","Data":"01a38d0bb5f1c8c5c1017e1b95efbbdccb800ecf49894d2a4bf62581196d1302"} Jan 23 16:41:37 crc kubenswrapper[4718]: I0123 16:41:37.161446 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e154a7-be17-400e-b268-384d47e31bf7" path="/var/lib/kubelet/pods/47e154a7-be17-400e-b268-384d47e31bf7/volumes" Jan 23 16:41:37 crc kubenswrapper[4718]: I0123 16:41:37.206671 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerStarted","Data":"6a929c4b61cb26cc2c16730b72ef926ed9208dbddfdb07808de3e1983b48d9bd"} Jan 23 16:41:37 crc kubenswrapper[4718]: I0123 16:41:37.282044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:37 crc kubenswrapper[4718]: I0123 16:41:37.282235 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:37 crc kubenswrapper[4718]: I0123 16:41:37.378855 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:38 crc kubenswrapper[4718]: I0123 16:41:38.229045 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerStarted","Data":"b923679cc3bd4ede4a4edb6dae93e256c47f0fec8e35787d37edd00d396bcb01"} Jan 23 16:41:38 crc kubenswrapper[4718]: I0123 16:41:38.296467 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:38 crc kubenswrapper[4718]: I0123 16:41:38.356041 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:39 crc kubenswrapper[4718]: I0123 16:41:39.244922 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerStarted","Data":"afd66fa255f3aef03ebeee22ea4f4930ce7a02e78f7b16a7a1503c383c258b94"} Jan 23 16:41:39 crc kubenswrapper[4718]: I0123 16:41:39.324622 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.255858 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9fshn" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="registry-server" containerID="cri-o://3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0" gracePeriod=2 Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.793031 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.943605 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content\") pod \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.943889 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities\") pod \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.943972 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lvn6\" (UniqueName: \"kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6\") pod \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\" (UID: \"a2b35ff0-0e22-4d60-a4dc-9b724ea67064\") " Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.952080 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities" (OuterVolumeSpecName: "utilities") pod "a2b35ff0-0e22-4d60-a4dc-9b724ea67064" (UID: "a2b35ff0-0e22-4d60-a4dc-9b724ea67064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.961664 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6" (OuterVolumeSpecName: "kube-api-access-6lvn6") pod "a2b35ff0-0e22-4d60-a4dc-9b724ea67064" (UID: "a2b35ff0-0e22-4d60-a4dc-9b724ea67064"). InnerVolumeSpecName "kube-api-access-6lvn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.969386 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:40 crc kubenswrapper[4718]: I0123 16:41:40.969421 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lvn6\" (UniqueName: \"kubernetes.io/projected/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-kube-api-access-6lvn6\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.036501 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2b35ff0-0e22-4d60-a4dc-9b724ea67064" (UID: "a2b35ff0-0e22-4d60-a4dc-9b724ea67064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.071767 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b35ff0-0e22-4d60-a4dc-9b724ea67064-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.270991 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerStarted","Data":"63b4109f388b5be61f14c0d382b1efbd0699d59e492e3614b312978a94442cee"} Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.271167 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.273882 4718 generic.go:334] "Generic (PLEG): container finished" podID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" 
containerID="3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0" exitCode=0 Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.273918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerDied","Data":"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0"} Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.273938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fshn" event={"ID":"a2b35ff0-0e22-4d60-a4dc-9b724ea67064","Type":"ContainerDied","Data":"01dd39ac01bb1b80c52f3092828e609bc43a2c778cb0cc6a52fd491e80b0ea85"} Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.273954 4718 scope.go:117] "RemoveContainer" containerID="3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.274049 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fshn" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.302893 4718 scope.go:117] "RemoveContainer" containerID="fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.308754 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.525207312 podStartE2EDuration="6.308730195s" podCreationTimestamp="2026-01-23 16:41:35 +0000 UTC" firstStartedPulling="2026-01-23 16:41:36.158274898 +0000 UTC m=+1497.305516889" lastFinishedPulling="2026-01-23 16:41:39.941797781 +0000 UTC m=+1501.089039772" observedRunningTime="2026-01-23 16:41:41.294283688 +0000 UTC m=+1502.441525679" watchObservedRunningTime="2026-01-23 16:41:41.308730195 +0000 UTC m=+1502.455972196" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.333702 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.333941 4718 scope.go:117] "RemoveContainer" containerID="e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.339111 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9fshn"] Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.386201 4718 scope.go:117] "RemoveContainer" containerID="3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0" Jan 23 16:41:41 crc kubenswrapper[4718]: E0123 16:41:41.386729 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0\": container with ID starting with 3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0 not found: ID does not exist" 
containerID="3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.386774 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0"} err="failed to get container status \"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0\": rpc error: code = NotFound desc = could not find container \"3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0\": container with ID starting with 3e86756690044f89fbc0b258c3565e503ec9fb3bd85c5826dc5ff23abf418fc0 not found: ID does not exist" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.386802 4718 scope.go:117] "RemoveContainer" containerID="fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775" Jan 23 16:41:41 crc kubenswrapper[4718]: E0123 16:41:41.387083 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775\": container with ID starting with fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775 not found: ID does not exist" containerID="fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.387142 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775"} err="failed to get container status \"fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775\": rpc error: code = NotFound desc = could not find container \"fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775\": container with ID starting with fe534dffc4a32b3f054e551210dac1b1286a0afb2e0978bac674ea22598e5775 not found: ID does not exist" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.387182 4718 scope.go:117] 
"RemoveContainer" containerID="e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836" Jan 23 16:41:41 crc kubenswrapper[4718]: E0123 16:41:41.387530 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836\": container with ID starting with e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836 not found: ID does not exist" containerID="e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836" Jan 23 16:41:41 crc kubenswrapper[4718]: I0123 16:41:41.387564 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836"} err="failed to get container status \"e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836\": rpc error: code = NotFound desc = could not find container \"e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836\": container with ID starting with e9ff2a380465ee68e8f85eea248e87f74ff2434a17996e9896666049aab70836 not found: ID does not exist" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.047508 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:43 crc kubenswrapper[4718]: E0123 16:41:43.048405 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="extract-utilities" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.048423 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="extract-utilities" Jan 23 16:41:43 crc kubenswrapper[4718]: E0123 16:41:43.048467 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="registry-server" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.048475 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="registry-server" Jan 23 16:41:43 crc kubenswrapper[4718]: E0123 16:41:43.048512 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="extract-content" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.048522 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="extract-content" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.048833 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" containerName="registry-server" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.051025 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.064067 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.152473 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b35ff0-0e22-4d60-a4dc-9b724ea67064" path="/var/lib/kubelet/pods/a2b35ff0-0e22-4d60-a4dc-9b724ea67064/volumes" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.227068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8ct\" (UniqueName: \"kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.227168 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.227364 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.329992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.330169 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.330280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8ct\" (UniqueName: \"kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.330538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.330799 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.352737 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8ct\" (UniqueName: \"kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct\") pod \"community-operators-sl7lk\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.378888 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:43 crc kubenswrapper[4718]: I0123 16:41:43.975928 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:44 crc kubenswrapper[4718]: I0123 16:41:44.311114 4718 generic.go:334] "Generic (PLEG): container finished" podID="5016935d-04f7-4a5c-a13d-365d574662e5" containerID="84dbda573b94e6e7f34185f93a795fa32b364dd2ce07f25ee9bfbbbebaaa64bd" exitCode=0 Jan 23 16:41:44 crc kubenswrapper[4718]: I0123 16:41:44.311160 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerDied","Data":"84dbda573b94e6e7f34185f93a795fa32b364dd2ce07f25ee9bfbbbebaaa64bd"} Jan 23 16:41:44 crc kubenswrapper[4718]: I0123 16:41:44.311189 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerStarted","Data":"f5aa707217355140a3de8576afeb6eab6964c1ad840093bd18916f9a27ee4755"} Jan 23 16:41:45 crc kubenswrapper[4718]: I0123 16:41:45.330886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerStarted","Data":"76c9fe24b2affead34aef127a33bab1d87bdd8337c4afba060b929e13ed74a20"} Jan 23 16:41:47 crc kubenswrapper[4718]: I0123 16:41:47.353822 4718 generic.go:334] "Generic (PLEG): container finished" podID="5016935d-04f7-4a5c-a13d-365d574662e5" containerID="76c9fe24b2affead34aef127a33bab1d87bdd8337c4afba060b929e13ed74a20" exitCode=0 Jan 23 16:41:47 crc kubenswrapper[4718]: I0123 16:41:47.353899 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" 
event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerDied","Data":"76c9fe24b2affead34aef127a33bab1d87bdd8337c4afba060b929e13ed74a20"} Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.053381 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.056252 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.074389 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.161559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.162187 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.162331 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgc9l\" (UniqueName: \"kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.264543 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.264735 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgc9l\" (UniqueName: \"kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.264833 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.265395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.267241 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.291204 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wgc9l\" (UniqueName: \"kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l\") pod \"redhat-marketplace-62ff5\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.371232 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerStarted","Data":"54ee1fbde2e3c63da3e9aed984c01f2054392f456ed0d8d7bc372eea4c83f0b3"} Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.381155 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.407310 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sl7lk" podStartSLOduration=1.958897404 podStartE2EDuration="5.407279827s" podCreationTimestamp="2026-01-23 16:41:43 +0000 UTC" firstStartedPulling="2026-01-23 16:41:44.31306326 +0000 UTC m=+1505.460305251" lastFinishedPulling="2026-01-23 16:41:47.761445683 +0000 UTC m=+1508.908687674" observedRunningTime="2026-01-23 16:41:48.392759678 +0000 UTC m=+1509.540001669" watchObservedRunningTime="2026-01-23 16:41:48.407279827 +0000 UTC m=+1509.554521818" Jan 23 16:41:48 crc kubenswrapper[4718]: I0123 16:41:48.966380 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:41:49 crc kubenswrapper[4718]: I0123 16:41:49.385239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerStarted","Data":"aaefc1f53a672bbe3c145f4544018ede931f7344fbad1be9268e25a6b3a5b083"} Jan 23 16:41:50 crc kubenswrapper[4718]: I0123 16:41:50.407933 4718 
generic.go:334] "Generic (PLEG): container finished" podID="5069ba53-2163-40eb-98d3-96c5724f5264" containerID="c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2" exitCode=0 Jan 23 16:41:50 crc kubenswrapper[4718]: I0123 16:41:50.408104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerDied","Data":"c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2"} Jan 23 16:41:51 crc kubenswrapper[4718]: I0123 16:41:51.422745 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerStarted","Data":"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b"} Jan 23 16:41:52 crc kubenswrapper[4718]: I0123 16:41:52.436617 4718 generic.go:334] "Generic (PLEG): container finished" podID="5069ba53-2163-40eb-98d3-96c5724f5264" containerID="5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b" exitCode=0 Jan 23 16:41:52 crc kubenswrapper[4718]: I0123 16:41:52.436734 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerDied","Data":"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b"} Jan 23 16:41:53 crc kubenswrapper[4718]: I0123 16:41:53.379992 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:53 crc kubenswrapper[4718]: I0123 16:41:53.380569 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:53 crc kubenswrapper[4718]: I0123 16:41:53.445103 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:53 crc 
kubenswrapper[4718]: I0123 16:41:53.451253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerStarted","Data":"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768"} Jan 23 16:41:53 crc kubenswrapper[4718]: I0123 16:41:53.507303 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62ff5" podStartSLOduration=3.116925017 podStartE2EDuration="5.507276451s" podCreationTimestamp="2026-01-23 16:41:48 +0000 UTC" firstStartedPulling="2026-01-23 16:41:50.41088793 +0000 UTC m=+1511.558129921" lastFinishedPulling="2026-01-23 16:41:52.801239374 +0000 UTC m=+1513.948481355" observedRunningTime="2026-01-23 16:41:53.482870438 +0000 UTC m=+1514.630112439" watchObservedRunningTime="2026-01-23 16:41:53.507276451 +0000 UTC m=+1514.654518442" Jan 23 16:41:53 crc kubenswrapper[4718]: I0123 16:41:53.523774 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:55 crc kubenswrapper[4718]: I0123 16:41:55.840448 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:55 crc kubenswrapper[4718]: I0123 16:41:55.841294 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sl7lk" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="registry-server" containerID="cri-o://54ee1fbde2e3c63da3e9aed984c01f2054392f456ed0d8d7bc372eea4c83f0b3" gracePeriod=2 Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.494031 4718 generic.go:334] "Generic (PLEG): container finished" podID="5016935d-04f7-4a5c-a13d-365d574662e5" containerID="54ee1fbde2e3c63da3e9aed984c01f2054392f456ed0d8d7bc372eea4c83f0b3" exitCode=0 Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.494268 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerDied","Data":"54ee1fbde2e3c63da3e9aed984c01f2054392f456ed0d8d7bc372eea4c83f0b3"} Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.687230 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.825352 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities\") pod \"5016935d-04f7-4a5c-a13d-365d574662e5\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.826055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content\") pod \"5016935d-04f7-4a5c-a13d-365d574662e5\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.826302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8ct\" (UniqueName: \"kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct\") pod \"5016935d-04f7-4a5c-a13d-365d574662e5\" (UID: \"5016935d-04f7-4a5c-a13d-365d574662e5\") " Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.827514 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities" (OuterVolumeSpecName: "utilities") pod "5016935d-04f7-4a5c-a13d-365d574662e5" (UID: "5016935d-04f7-4a5c-a13d-365d574662e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.833675 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct" (OuterVolumeSpecName: "kube-api-access-qf8ct") pod "5016935d-04f7-4a5c-a13d-365d574662e5" (UID: "5016935d-04f7-4a5c-a13d-365d574662e5"). InnerVolumeSpecName "kube-api-access-qf8ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.873375 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5016935d-04f7-4a5c-a13d-365d574662e5" (UID: "5016935d-04f7-4a5c-a13d-365d574662e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.929271 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.929304 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5016935d-04f7-4a5c-a13d-365d574662e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:56 crc kubenswrapper[4718]: I0123 16:41:56.929316 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8ct\" (UniqueName: \"kubernetes.io/projected/5016935d-04f7-4a5c-a13d-365d574662e5-kube-api-access-qf8ct\") on node \"crc\" DevicePath \"\"" Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.516308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sl7lk" 
event={"ID":"5016935d-04f7-4a5c-a13d-365d574662e5","Type":"ContainerDied","Data":"f5aa707217355140a3de8576afeb6eab6964c1ad840093bd18916f9a27ee4755"} Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.516851 4718 scope.go:117] "RemoveContainer" containerID="54ee1fbde2e3c63da3e9aed984c01f2054392f456ed0d8d7bc372eea4c83f0b3" Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.516380 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sl7lk" Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.547602 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.551451 4718 scope.go:117] "RemoveContainer" containerID="76c9fe24b2affead34aef127a33bab1d87bdd8337c4afba060b929e13ed74a20" Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.560278 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sl7lk"] Jan 23 16:41:57 crc kubenswrapper[4718]: I0123 16:41:57.579889 4718 scope.go:117] "RemoveContainer" containerID="84dbda573b94e6e7f34185f93a795fa32b364dd2ce07f25ee9bfbbbebaaa64bd" Jan 23 16:41:58 crc kubenswrapper[4718]: I0123 16:41:58.381367 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:58 crc kubenswrapper[4718]: I0123 16:41:58.382613 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:58 crc kubenswrapper[4718]: I0123 16:41:58.455776 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:41:58 crc kubenswrapper[4718]: I0123 16:41:58.586792 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 
16:41:59 crc kubenswrapper[4718]: I0123 16:41:59.162904 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" path="/var/lib/kubelet/pods/5016935d-04f7-4a5c-a13d-365d574662e5/volumes" Jan 23 16:42:00 crc kubenswrapper[4718]: I0123 16:42:00.645795 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:42:01 crc kubenswrapper[4718]: I0123 16:42:01.562348 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62ff5" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="registry-server" containerID="cri-o://3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768" gracePeriod=2 Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.240227 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.386238 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content\") pod \"5069ba53-2163-40eb-98d3-96c5724f5264\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.388077 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities\") pod \"5069ba53-2163-40eb-98d3-96c5724f5264\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.388260 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgc9l\" (UniqueName: \"kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l\") pod 
\"5069ba53-2163-40eb-98d3-96c5724f5264\" (UID: \"5069ba53-2163-40eb-98d3-96c5724f5264\") " Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.389233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities" (OuterVolumeSpecName: "utilities") pod "5069ba53-2163-40eb-98d3-96c5724f5264" (UID: "5069ba53-2163-40eb-98d3-96c5724f5264"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.395163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l" (OuterVolumeSpecName: "kube-api-access-wgc9l") pod "5069ba53-2163-40eb-98d3-96c5724f5264" (UID: "5069ba53-2163-40eb-98d3-96c5724f5264"). InnerVolumeSpecName "kube-api-access-wgc9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.411303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5069ba53-2163-40eb-98d3-96c5724f5264" (UID: "5069ba53-2163-40eb-98d3-96c5724f5264"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.491227 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgc9l\" (UniqueName: \"kubernetes.io/projected/5069ba53-2163-40eb-98d3-96c5724f5264-kube-api-access-wgc9l\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.491279 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.491288 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5069ba53-2163-40eb-98d3-96c5724f5264-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.583698 4718 generic.go:334] "Generic (PLEG): container finished" podID="5069ba53-2163-40eb-98d3-96c5724f5264" containerID="3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768" exitCode=0 Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.583802 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerDied","Data":"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768"} Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.584297 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62ff5" event={"ID":"5069ba53-2163-40eb-98d3-96c5724f5264","Type":"ContainerDied","Data":"aaefc1f53a672bbe3c145f4544018ede931f7344fbad1be9268e25a6b3a5b083"} Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.584362 4718 scope.go:117] "RemoveContainer" containerID="3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 
16:42:02.583827 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62ff5" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.656974 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.664330 4718 scope.go:117] "RemoveContainer" containerID="5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.671308 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62ff5"] Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.691432 4718 scope.go:117] "RemoveContainer" containerID="c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.755714 4718 scope.go:117] "RemoveContainer" containerID="3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768" Jan 23 16:42:02 crc kubenswrapper[4718]: E0123 16:42:02.756298 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768\": container with ID starting with 3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768 not found: ID does not exist" containerID="3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.756534 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768"} err="failed to get container status \"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768\": rpc error: code = NotFound desc = could not find container \"3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768\": container with ID starting with 
3429cb64a95eba56e059810de169b8d006087d72abcd1bad74280acfbc48c768 not found: ID does not exist" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.756648 4718 scope.go:117] "RemoveContainer" containerID="5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b" Jan 23 16:42:02 crc kubenswrapper[4718]: E0123 16:42:02.756984 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b\": container with ID starting with 5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b not found: ID does not exist" containerID="5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.757029 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b"} err="failed to get container status \"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b\": rpc error: code = NotFound desc = could not find container \"5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b\": container with ID starting with 5e9977e5cb6c248419057d894959f558ec3248744b6be5fb26063c3026d96f5b not found: ID does not exist" Jan 23 16:42:02 crc kubenswrapper[4718]: I0123 16:42:02.757060 4718 scope.go:117] "RemoveContainer" containerID="c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2" Jan 23 16:42:02 crc kubenswrapper[4718]: E0123 16:42:02.757609 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2\": container with ID starting with c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2 not found: ID does not exist" containerID="c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2" Jan 23 16:42:02 crc 
kubenswrapper[4718]: I0123 16:42:02.757661 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2"} err="failed to get container status \"c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2\": rpc error: code = NotFound desc = could not find container \"c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2\": container with ID starting with c8208aba4cf3bd6dca4632c523d9da483cb23b86a073551b3a061168ef4194a2 not found: ID does not exist"
Jan 23 16:42:03 crc kubenswrapper[4718]: I0123 16:42:03.165365 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" path="/var/lib/kubelet/pods/5069ba53-2163-40eb-98d3-96c5724f5264/volumes"
Jan 23 16:42:05 crc kubenswrapper[4718]: I0123 16:42:05.619749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.422746 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-q47zv"]
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.438747 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-q47zv"]
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.515850 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-5phqp"]
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.516828 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="extract-content"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.516864 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="extract-content"
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.516898 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.516911 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.516933 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="extract-utilities"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.516947 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="extract-utilities"
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.516979 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="extract-utilities"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.516993 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="extract-utilities"
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.517041 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="extract-content"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.517055 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="extract-content"
Jan 23 16:42:17 crc kubenswrapper[4718]: E0123 16:42:17.517090 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.517103 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.517682 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5069ba53-2163-40eb-98d3-96c5724f5264" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.517741 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5016935d-04f7-4a5c-a13d-365d574662e5" containerName="registry-server"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.519352 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.528468 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5phqp"]
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.679391 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.679725 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgxr\" (UniqueName: \"kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.680109 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.782854 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.782963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.782993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhgxr\" (UniqueName: \"kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.795462 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.795562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.799663 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgxr\" (UniqueName: \"kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr\") pod \"heat-db-sync-5phqp\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:17 crc kubenswrapper[4718]: I0123 16:42:17.839556 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5phqp"
Jan 23 16:42:18 crc kubenswrapper[4718]: I0123 16:42:18.420095 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5phqp"]
Jan 23 16:42:18 crc kubenswrapper[4718]: I0123 16:42:18.838331 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5phqp" event={"ID":"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3","Type":"ContainerStarted","Data":"ee05f19c87aa80609dce61e5fed796b3943c6dd346ec5c982761eb566e979eed"}
Jan 23 16:42:19 crc kubenswrapper[4718]: I0123 16:42:19.167862 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d57ff1-707c-4dd4-8922-1d910f52faf8" path="/var/lib/kubelet/pods/a3d57ff1-707c-4dd4-8922-1d910f52faf8/volumes"
Jan 23 16:42:19 crc kubenswrapper[4718]: I0123 16:42:19.243476 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.190577 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.191150 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-central-agent" containerID="cri-o://6a929c4b61cb26cc2c16730b72ef926ed9208dbddfdb07808de3e1983b48d9bd" gracePeriod=30
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.191243 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="sg-core" containerID="cri-o://afd66fa255f3aef03ebeee22ea4f4930ce7a02e78f7b16a7a1503c383c258b94" gracePeriod=30
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.191257 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-notification-agent" containerID="cri-o://b923679cc3bd4ede4a4edb6dae93e256c47f0fec8e35787d37edd00d396bcb01" gracePeriod=30
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.191257 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="proxy-httpd" containerID="cri-o://63b4109f388b5be61f14c0d382b1efbd0699d59e492e3614b312978a94442cee" gracePeriod=30
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.414986 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875246 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7e46965-9135-4846-9407-6eb50e290b89" containerID="63b4109f388b5be61f14c0d382b1efbd0699d59e492e3614b312978a94442cee" exitCode=0
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875280 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7e46965-9135-4846-9407-6eb50e290b89" containerID="afd66fa255f3aef03ebeee22ea4f4930ce7a02e78f7b16a7a1503c383c258b94" exitCode=2
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875288 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7e46965-9135-4846-9407-6eb50e290b89" containerID="6a929c4b61cb26cc2c16730b72ef926ed9208dbddfdb07808de3e1983b48d9bd" exitCode=0
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875310 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerDied","Data":"63b4109f388b5be61f14c0d382b1efbd0699d59e492e3614b312978a94442cee"}
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875337 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerDied","Data":"afd66fa255f3aef03ebeee22ea4f4930ce7a02e78f7b16a7a1503c383c258b94"}
Jan 23 16:42:20 crc kubenswrapper[4718]: I0123 16:42:20.875347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerDied","Data":"6a929c4b61cb26cc2c16730b72ef926ed9208dbddfdb07808de3e1983b48d9bd"}
Jan 23 16:42:23 crc kubenswrapper[4718]: I0123 16:42:23.993688 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" containerID="cri-o://6c8509f2874fd88ca3b942fc341f4dfaf9f5147a532581171abba5681a692eb1" gracePeriod=604796
Jan 23 16:42:25 crc kubenswrapper[4718]: I0123 16:42:25.948811 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7e46965-9135-4846-9407-6eb50e290b89" containerID="b923679cc3bd4ede4a4edb6dae93e256c47f0fec8e35787d37edd00d396bcb01" exitCode=0
Jan 23 16:42:25 crc kubenswrapper[4718]: I0123 16:42:25.949257 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerDied","Data":"b923679cc3bd4ede4a4edb6dae93e256c47f0fec8e35787d37edd00d396bcb01"}
Jan 23 16:42:25 crc kubenswrapper[4718]: I0123 16:42:25.993318 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" containerID="cri-o://7e7a1fe473e3c13076474935b70cb3e949509a253a0683c2e8aefea8d06a2fba" gracePeriod=604795
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.152382 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234179 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztl9s\" (UniqueName: \"kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234331 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234495 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234554 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234581 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234639 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234669 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.234735 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd\") pod \"c7e46965-9135-4846-9407-6eb50e290b89\" (UID: \"c7e46965-9135-4846-9407-6eb50e290b89\") "
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.235055 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.236036 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.236050 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.243267 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts" (OuterVolumeSpecName: "scripts") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.270839 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s" (OuterVolumeSpecName: "kube-api-access-ztl9s") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "kube-api-access-ztl9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.289004 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.316087 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.338921 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e46965-9135-4846-9407-6eb50e290b89-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.338962 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztl9s\" (UniqueName: \"kubernetes.io/projected/c7e46965-9135-4846-9407-6eb50e290b89-kube-api-access-ztl9s\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.338978 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.338990 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.339002 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.348960 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.407589 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data" (OuterVolumeSpecName: "config-data") pod "c7e46965-9135-4846-9407-6eb50e290b89" (UID: "c7e46965-9135-4846-9407-6eb50e290b89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.441421 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.441452 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e46965-9135-4846-9407-6eb50e290b89-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.985058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e46965-9135-4846-9407-6eb50e290b89","Type":"ContainerDied","Data":"01a38d0bb5f1c8c5c1017e1b95efbbdccb800ecf49894d2a4bf62581196d1302"}
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.985545 4718 scope.go:117] "RemoveContainer" containerID="63b4109f388b5be61f14c0d382b1efbd0699d59e492e3614b312978a94442cee"
Jan 23 16:42:26 crc kubenswrapper[4718]: I0123 16:42:26.985338 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.047210 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.049588 4718 scope.go:117] "RemoveContainer" containerID="afd66fa255f3aef03ebeee22ea4f4930ce7a02e78f7b16a7a1503c383c258b94"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.073424 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.094377 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:27 crc kubenswrapper[4718]: E0123 16:42:27.094997 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="proxy-httpd"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095021 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="proxy-httpd"
Jan 23 16:42:27 crc kubenswrapper[4718]: E0123 16:42:27.095046 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-notification-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095052 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-notification-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: E0123 16:42:27.095097 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-central-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095104 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-central-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: E0123 16:42:27.095120 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="sg-core"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095125 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="sg-core"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095514 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-central-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095537 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="sg-core"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095621 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="proxy-httpd"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.095651 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e46965-9135-4846-9407-6eb50e290b89" containerName="ceilometer-notification-agent"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.097978 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.098292 4718 scope.go:117] "RemoveContainer" containerID="b923679cc3bd4ede4a4edb6dae93e256c47f0fec8e35787d37edd00d396bcb01"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.099919 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.100088 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.103763 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.109846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.141037 4718 scope.go:117] "RemoveContainer" containerID="6a929c4b61cb26cc2c16730b72ef926ed9208dbddfdb07808de3e1983b48d9bd"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.161829 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e46965-9135-4846-9407-6eb50e290b89" path="/var/lib/kubelet/pods/c7e46965-9135-4846-9407-6eb50e290b89/volumes"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180110 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-run-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180254 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180322 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-config-data\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180346 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-log-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180369 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180404 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rv9\" (UniqueName: \"kubernetes.io/projected/764a9a7e-61b2-4513-8f87-fc357857c90f-kube-api-access-v5rv9\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.180509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-scripts\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5rv9\" (UniqueName: \"kubernetes.io/projected/764a9a7e-61b2-4513-8f87-fc357857c90f-kube-api-access-v5rv9\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283083 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283156 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-scripts\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283245 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-run-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-config-data\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283370 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-log-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283392 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.283819 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-run-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.284383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764a9a7e-61b2-4513-8f87-fc357857c90f-log-httpd\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.288968 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.290396 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.291684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.295938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-scripts\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.298584 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764a9a7e-61b2-4513-8f87-fc357857c90f-config-data\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.303226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5rv9\" (UniqueName: \"kubernetes.io/projected/764a9a7e-61b2-4513-8f87-fc357857c90f-kube-api-access-v5rv9\") pod \"ceilometer-0\" (UID: \"764a9a7e-61b2-4513-8f87-fc357857c90f\") " pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.423726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 16:42:27 crc kubenswrapper[4718]: I0123 16:42:27.935509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 16:42:28 crc kubenswrapper[4718]: I0123 16:42:28.002875 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"7066bac18a2bb7b0dad6da24ce212fa5a7e658e146f8558098d3ab35a60f170b"}
Jan 23 16:42:30 crc kubenswrapper[4718]: I0123 16:42:30.987770 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Jan 23 16:42:31 crc kubenswrapper[4718]: I0123 16:42:31.060533 4718 generic.go:334] "Generic (PLEG): container finished" podID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerID="6c8509f2874fd88ca3b942fc341f4dfaf9f5147a532581171abba5681a692eb1" exitCode=0
Jan 23 16:42:31 crc kubenswrapper[4718]: I0123 16:42:31.060576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerDied","Data":"6c8509f2874fd88ca3b942fc341f4dfaf9f5147a532581171abba5681a692eb1"}
Jan 23 16:42:31 crc kubenswrapper[4718]: I0123 16:42:31.107946 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.088445 4718 generic.go:334] "Generic (PLEG): container finished" podID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerID="7e7a1fe473e3c13076474935b70cb3e949509a253a0683c2e8aefea8d06a2fba" exitCode=0
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.088778 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerDied","Data":"7e7a1fe473e3c13076474935b70cb3e949509a253a0683c2e8aefea8d06a2fba"}
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.824950 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"]
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.838568 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.893345 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.901016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.901282 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnsb\" (UniqueName: \"kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.901329 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.901456 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.901536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.902776 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.902861 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq"
Jan 23 16:42:33 crc kubenswrapper[4718]: I0123 16:42:33.912488 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"] Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006477 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006683 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rnsb\" (UniqueName: \"kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006704 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 
16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006744 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.006774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.007462 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.007481 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.007557 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.008503 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.008816 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.009709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.049934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rnsb\" (UniqueName: \"kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb\") pod \"dnsmasq-dns-5b75489c6f-lv7pq\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:34 crc kubenswrapper[4718]: I0123 16:42:34.217554 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.050555 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.063146 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.211820 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.211905 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.211936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.211982 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212016 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212054 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212157 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212323 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc 
kubenswrapper[4718]: I0123 16:42:42.212378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212406 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212432 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.212952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213089 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213144 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213173 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2h9b\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b\") pod \"f203e991-61a5-4809-bece-4d99f1e6b53a\" (UID: \"f203e991-61a5-4809-bece-4d99f1e6b53a\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213257 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvnpl\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.213282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data\") pod \"3b829751-ec51-4363-a796-fbf547cb8b6f\" (UID: \"3b829751-ec51-4363-a796-fbf547cb8b6f\") " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 
16:42:42.214208 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.216125 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.221170 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.221255 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.222893 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.223325 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.229109 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info" (OuterVolumeSpecName: "pod-info") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.229292 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.231193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.233751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.234008 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info" (OuterVolumeSpecName: "pod-info") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.234246 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.234882 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b" (OuterVolumeSpecName: "kube-api-access-w2h9b") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "kube-api-access-w2h9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.246735 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl" (OuterVolumeSpecName: "kube-api-access-nvnpl") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "kube-api-access-nvnpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.281010 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f203e991-61a5-4809-bece-4d99f1e6b53a","Type":"ContainerDied","Data":"8ebf0b0dc15c30e22fca24e739ca6e9a352fdf885d26ba25c891aa2e9eed91d9"} Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.281062 4718 scope.go:117] "RemoveContainer" containerID="7e7a1fe473e3c13076474935b70cb3e949509a253a0683c2e8aefea8d06a2fba" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.281220 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.282740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e" (OuterVolumeSpecName: "persistence") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.283276 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21" (OuterVolumeSpecName: "persistence") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.295584 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3b829751-ec51-4363-a796-fbf547cb8b6f","Type":"ContainerDied","Data":"624a55f51dc269b8bffd32c814f9b8906a3d99860daa9237afe49da6ca09c0f0"} Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.295723 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.300593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data" (OuterVolumeSpecName: "config-data") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.316673 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317201 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317273 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317332 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f203e991-61a5-4809-bece-4d99f1e6b53a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317405 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") on node \"crc\" " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317470 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b829751-ec51-4363-a796-fbf547cb8b6f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317732 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-plugins-conf\") on node \"crc\" DevicePath 
\"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317802 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f203e991-61a5-4809-bece-4d99f1e6b53a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317863 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2h9b\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-kube-api-access-w2h9b\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317930 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvnpl\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-kube-api-access-nvnpl\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.317995 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.318110 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") on node \"crc\" " Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.318202 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.318499 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc 
kubenswrapper[4718]: I0123 16:42:42.318563 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.318621 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b829751-ec51-4363-a796-fbf547cb8b6f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.318807 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.332843 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data" (OuterVolumeSpecName: "config-data") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.335768 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf" (OuterVolumeSpecName: "server-conf") pod "f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.365355 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.365527 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21") on node "crc" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.386337 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.386507 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e") on node "crc" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.401620 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf" (OuterVolumeSpecName: "server-conf") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.412414 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3b829751-ec51-4363-a796-fbf547cb8b6f" (UID: "3b829751-ec51-4363-a796-fbf547cb8b6f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421246 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421278 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b829751-ec51-4363-a796-fbf547cb8b6f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421288 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421298 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b829751-ec51-4363-a796-fbf547cb8b6f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421306 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.421318 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f203e991-61a5-4809-bece-4d99f1e6b53a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.441764 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod 
"f203e991-61a5-4809-bece-4d99f1e6b53a" (UID: "f203e991-61a5-4809-bece-4d99f1e6b53a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.524498 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f203e991-61a5-4809-bece-4d99f1e6b53a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.626881 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.626934 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.627058 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-5phqp_openstack(0ab31210-204e-4f0c-9aa7-ef99dc7db5c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.628814 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-5phqp" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.648318 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.657416 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.679685 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.694818 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.713275 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.714124 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="setup-container" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714145 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="setup-container" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.714176 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714183 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.714210 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714218 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: E0123 16:42:42.714228 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="setup-container" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714234 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="setup-container" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714473 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.714497 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.716078 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719563 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719609 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719673 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t559l" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719851 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719880 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.719952 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.720032 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.730529 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.751744 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.754230 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.771913 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.840963 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d346ed1b-38d4-4c87-82f6-78ec3880c670-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841022 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841050 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841109 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841156 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bca04db1-8e77-405e-b8ef-656cf882136c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841264 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841306 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841333 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-config-data\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841350 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/d346ed1b-38d4-4c87-82f6-78ec3880c670-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841382 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841404 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bca04db1-8e77-405e-b8ef-656cf882136c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841422 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841447 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psfm\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-kube-api-access-7psfm\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841539 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8hj\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-kube-api-access-gn8hj\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.841561 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8hj\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-kube-api-access-gn8hj\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945781 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d346ed1b-38d4-4c87-82f6-78ec3880c670-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945831 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945855 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.945977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946082 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bca04db1-8e77-405e-b8ef-656cf882136c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946119 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946155 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-config-data\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946301 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d346ed1b-38d4-4c87-82f6-78ec3880c670-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 
16:42:42.946319 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946346 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bca04db1-8e77-405e-b8ef-656cf882136c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.946400 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psfm\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-kube-api-access-7psfm\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.947709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.949286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.949576 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.950148 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.950383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.951437 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.951854 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-config-data\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " 
pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.952396 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d346ed1b-38d4-4c87-82f6-78ec3880c670-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.952772 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bca04db1-8e77-405e-b8ef-656cf882136c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.953202 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.953351 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.953378 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4a447bcf10e45026334e95e1687623774169b3df4db8276f7228b00b19c67e0a/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.957353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.958235 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.959926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d346ed1b-38d4-4c87-82f6-78ec3880c670-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.959976 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.960227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d346ed1b-38d4-4c87-82f6-78ec3880c670-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.960261 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bca04db1-8e77-405e-b8ef-656cf882136c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.961060 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.961096 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/912fab9ca85c0800faa14298d65b1b37576d7b5c16478735c0355609b4c85a2e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.961470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bca04db1-8e77-405e-b8ef-656cf882136c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.968892 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.968949 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psfm\" (UniqueName: \"kubernetes.io/projected/d346ed1b-38d4-4c87-82f6-78ec3880c670-kube-api-access-7psfm\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:42 crc kubenswrapper[4718]: I0123 16:42:42.969099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8hj\" (UniqueName: \"kubernetes.io/projected/bca04db1-8e77-405e-b8ef-656cf882136c-kube-api-access-gn8hj\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.024836 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fd6b2a9-80ce-46f7-84ae-8bfbee145e2e\") pod \"rabbitmq-cell1-server-0\" (UID: \"bca04db1-8e77-405e-b8ef-656cf882136c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.028789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a134a6f-6d4c-4339-94a3-34d8647b2a21\") pod \"rabbitmq-server-2\" (UID: \"d346ed1b-38d4-4c87-82f6-78ec3880c670\") " pod="openstack/rabbitmq-server-2" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.039937 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.078356 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.160379 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" path="/var/lib/kubelet/pods/3b829751-ec51-4363-a796-fbf547cb8b6f/volumes" Jan 23 16:42:43 crc kubenswrapper[4718]: I0123 16:42:43.162002 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" path="/var/lib/kubelet/pods/f203e991-61a5-4809-bece-4d99f1e6b53a/volumes" Jan 23 16:42:43 crc kubenswrapper[4718]: E0123 16:42:43.318852 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-5phqp" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" Jan 23 16:42:45 crc kubenswrapper[4718]: I0123 16:42:45.987084 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3b829751-ec51-4363-a796-fbf547cb8b6f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.107745 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f203e991-61a5-4809-bece-4d99f1e6b53a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.147355 4718 scope.go:117] "RemoveContainer" containerID="57a7d3e23a91f61a08f1235702ca3507ea01f14b822f4e1690cf5b7da6226024" Jan 23 16:42:46 crc kubenswrapper[4718]: E0123 16:42:46.163396 4718 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 23 16:42:46 crc kubenswrapper[4718]: E0123 16:42:46.163481 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 23 16:42:46 crc kubenswrapper[4718]: E0123 16:42:46.163719 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n677h8bh689h648h587h67ch675h5d6h8hc9hd7hb6hch59bh95h57bh9ch6ch9fh58dh96h54ch5c7hbhb5h5c5h676h674hbfh5b8hd9h74q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5rv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccoun
t,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(764a9a7e-61b2-4513-8f87-fc357857c90f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.381432 4718 scope.go:117] "RemoveContainer" containerID="6c8509f2874fd88ca3b942fc341f4dfaf9f5147a532581171abba5681a692eb1" Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.439152 4718 scope.go:117] "RemoveContainer" containerID="4ee5fb187d594a7dd22f54bcafa87a4a75cd20fc67e027fbd78b5f35074b51db" Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.634968 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"] Jan 23 16:42:46 crc kubenswrapper[4718]: W0123 16:42:46.649108 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb337e576_c57f_4f60_b480_2b411c2c22f7.slice/crio-e087805037442997a0250516c5766a5f5cd97b8871b293864ab44153eb991fd4 
WatchSource:0}: Error finding container e087805037442997a0250516c5766a5f5cd97b8871b293864ab44153eb991fd4: Status 404 returned error can't find the container with id e087805037442997a0250516c5766a5f5cd97b8871b293864ab44153eb991fd4 Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.745206 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 23 16:42:46 crc kubenswrapper[4718]: W0123 16:42:46.751970 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd346ed1b_38d4_4c87_82f6_78ec3880c670.slice/crio-32e417463969b3c0a0247c735667ee6f1596673283dd9dd516712f2d8c8dc178 WatchSource:0}: Error finding container 32e417463969b3c0a0247c735667ee6f1596673283dd9dd516712f2d8c8dc178: Status 404 returned error can't find the container with id 32e417463969b3c0a0247c735667ee6f1596673283dd9dd516712f2d8c8dc178 Jan 23 16:42:46 crc kubenswrapper[4718]: I0123 16:42:46.829295 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 16:42:46 crc kubenswrapper[4718]: W0123 16:42:46.841763 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbca04db1_8e77_405e_b8ef_656cf882136c.slice/crio-6ab144689e0063132ebe11ce73f2d6d5da9614a84e049d53d88d557bb8e34c32 WatchSource:0}: Error finding container 6ab144689e0063132ebe11ce73f2d6d5da9614a84e049d53d88d557bb8e34c32: Status 404 returned error can't find the container with id 6ab144689e0063132ebe11ce73f2d6d5da9614a84e049d53d88d557bb8e34c32 Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.379391 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bca04db1-8e77-405e-b8ef-656cf882136c","Type":"ContainerStarted","Data":"6ab144689e0063132ebe11ce73f2d6d5da9614a84e049d53d88d557bb8e34c32"} Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.387168 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d346ed1b-38d4-4c87-82f6-78ec3880c670","Type":"ContainerStarted","Data":"32e417463969b3c0a0247c735667ee6f1596673283dd9dd516712f2d8c8dc178"} Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.395195 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"26eab2698954260645f625385b8409ca838eca073fc587b882808c01d3c3eac7"} Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.399782 4718 generic.go:334] "Generic (PLEG): container finished" podID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerID="b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5" exitCode=0 Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.399819 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" event={"ID":"b337e576-c57f-4f60-b480-2b411c2c22f7","Type":"ContainerDied","Data":"b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5"} Jan 23 16:42:47 crc kubenswrapper[4718]: I0123 16:42:47.399843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" event={"ID":"b337e576-c57f-4f60-b480-2b411c2c22f7","Type":"ContainerStarted","Data":"e087805037442997a0250516c5766a5f5cd97b8871b293864ab44153eb991fd4"} Jan 23 16:42:48 crc kubenswrapper[4718]: I0123 16:42:48.412460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"0070d62e8524636ec885eebf5787f78f20216afa3f52cae388e4fa6f3a012eda"} Jan 23 16:42:48 crc kubenswrapper[4718]: I0123 16:42:48.415051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" 
event={"ID":"b337e576-c57f-4f60-b480-2b411c2c22f7","Type":"ContainerStarted","Data":"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e"} Jan 23 16:42:48 crc kubenswrapper[4718]: I0123 16:42:48.415210 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:48 crc kubenswrapper[4718]: I0123 16:42:48.448596 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" podStartSLOduration=15.448575745 podStartE2EDuration="15.448575745s" podCreationTimestamp="2026-01-23 16:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:42:48.437337464 +0000 UTC m=+1569.584579455" watchObservedRunningTime="2026-01-23 16:42:48.448575745 +0000 UTC m=+1569.595817736" Jan 23 16:42:49 crc kubenswrapper[4718]: E0123 16:42:49.273103 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" Jan 23 16:42:49 crc kubenswrapper[4718]: I0123 16:42:49.430301 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bca04db1-8e77-405e-b8ef-656cf882136c","Type":"ContainerStarted","Data":"78e0fc1e08ebc2e2f2e4a8669e1e8784935bc4a5f9312fd58bd86c64eb462965"} Jan 23 16:42:49 crc kubenswrapper[4718]: I0123 16:42:49.433029 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d346ed1b-38d4-4c87-82f6-78ec3880c670","Type":"ContainerStarted","Data":"776213fecd4cab49561a950992b6bc2c42d214dcd04577d0dfea99854c2238a2"} Jan 23 16:42:49 crc kubenswrapper[4718]: I0123 16:42:49.436677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"7423de286316ac2bf4e4b7c0ad9db52f353e8fc5cf626977dbfd453308413cdc"} Jan 23 16:42:49 crc kubenswrapper[4718]: E0123 16:42:49.440324 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" Jan 23 16:42:50 crc kubenswrapper[4718]: I0123 16:42:50.449941 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 16:42:50 crc kubenswrapper[4718]: E0123 16:42:50.455761 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" Jan 23 16:42:51 crc kubenswrapper[4718]: E0123 16:42:51.460775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.219991 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.331940 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"] Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.332276 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="dnsmasq-dns" containerID="cri-o://0f8abfdbe5d6b69433d92944bc28ddd508f31829fa8b1391ad2932a3e2b16118" gracePeriod=10 Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.465915 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4pkvj"] Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.471243 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.482528 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4pkvj"] Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.536855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-config\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.536910 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.536965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 
16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.537003 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.537027 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.537046 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqt8\" (UniqueName: \"kubernetes.io/projected/69dc82c8-1e85-459e-9580-cbc33c567be5-kube-api-access-chqt8\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.537114 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.540728 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerID="0f8abfdbe5d6b69433d92944bc28ddd508f31829fa8b1391ad2932a3e2b16118" exitCode=0 Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.541154 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" event={"ID":"1b08d018-0695-4abc-8779-8d448c1ac2c2","Type":"ContainerDied","Data":"0f8abfdbe5d6b69433d92944bc28ddd508f31829fa8b1391ad2932a3e2b16118"} Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-config\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639446 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639546 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chqt8\" (UniqueName: \"kubernetes.io/projected/69dc82c8-1e85-459e-9580-cbc33c567be5-kube-api-access-chqt8\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.639597 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.640335 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-config\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.640438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.640852 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.641120 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.641309 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.641910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69dc82c8-1e85-459e-9580-cbc33c567be5-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.680939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chqt8\" (UniqueName: \"kubernetes.io/projected/69dc82c8-1e85-459e-9580-cbc33c567be5-kube-api-access-chqt8\") pod \"dnsmasq-dns-5d75f767dc-4pkvj\" (UID: \"69dc82c8-1e85-459e-9580-cbc33c567be5\") " pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:54 crc kubenswrapper[4718]: I0123 16:42:54.841126 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.021106 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151392 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151537 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151748 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5cbr\" (UniqueName: \"kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151841 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: 
\"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.151883 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc\") pod \"1b08d018-0695-4abc-8779-8d448c1ac2c2\" (UID: \"1b08d018-0695-4abc-8779-8d448c1ac2c2\") " Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.256223 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr" (OuterVolumeSpecName: "kube-api-access-x5cbr") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "kube-api-access-x5cbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.321243 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.322710 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.329311 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.334838 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config" (OuterVolumeSpecName: "config") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.336414 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b08d018-0695-4abc-8779-8d448c1ac2c2" (UID: "1b08d018-0695-4abc-8779-8d448c1ac2c2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359068 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359113 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5cbr\" (UniqueName: \"kubernetes.io/projected/1b08d018-0695-4abc-8779-8d448c1ac2c2-kube-api-access-x5cbr\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359126 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359142 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359154 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.359166 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b08d018-0695-4abc-8779-8d448c1ac2c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.429761 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4pkvj"] Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.627302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" 
event={"ID":"1b08d018-0695-4abc-8779-8d448c1ac2c2","Type":"ContainerDied","Data":"90d748095fd3914e9ff43a829f818e9ec10a8fd0fc0c49ccbbd1224c0034cd28"} Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.627355 4718 scope.go:117] "RemoveContainer" containerID="0f8abfdbe5d6b69433d92944bc28ddd508f31829fa8b1391ad2932a3e2b16118" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.627503 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-fsdtq" Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.650847 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" event={"ID":"69dc82c8-1e85-459e-9580-cbc33c567be5","Type":"ContainerStarted","Data":"55fcbb67578d52dee6c78f6299d3faa4b03cbd64e1dc821227a14883f3f3e938"} Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.849339 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"] Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.864588 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-fsdtq"] Jan 23 16:42:55 crc kubenswrapper[4718]: I0123 16:42:55.876927 4718 scope.go:117] "RemoveContainer" containerID="9ea0570e93f4aea256b10516622783bdfafc9b1709d2733181190e89d2277159" Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.667809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5phqp" event={"ID":"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3","Type":"ContainerStarted","Data":"e996032703d2a23cedc05d499219975ca23fd258a96ea407eb5ca222051fccb1"} Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.674990 4718 generic.go:334] "Generic (PLEG): container finished" podID="69dc82c8-1e85-459e-9580-cbc33c567be5" containerID="ec8f049492d380e7e9d64b941a95c14bd1fe9b01020117335784137386f5caae" exitCode=0 Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.675070 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" event={"ID":"69dc82c8-1e85-459e-9580-cbc33c567be5","Type":"ContainerDied","Data":"ec8f049492d380e7e9d64b941a95c14bd1fe9b01020117335784137386f5caae"} Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.710563 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-5phqp" podStartSLOduration=2.363637441 podStartE2EDuration="39.710527838s" podCreationTimestamp="2026-01-23 16:42:17 +0000 UTC" firstStartedPulling="2026-01-23 16:42:18.420003138 +0000 UTC m=+1539.567245129" lastFinishedPulling="2026-01-23 16:42:55.766893545 +0000 UTC m=+1576.914135526" observedRunningTime="2026-01-23 16:42:56.691503128 +0000 UTC m=+1577.838745159" watchObservedRunningTime="2026-01-23 16:42:56.710527838 +0000 UTC m=+1577.857769839" Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.842879 4718 scope.go:117] "RemoveContainer" containerID="c5c885c944cb6731e48532193a4758e237470f3fee24eb33623c429f5c050009" Jan 23 16:42:56 crc kubenswrapper[4718]: I0123 16:42:56.903891 4718 scope.go:117] "RemoveContainer" containerID="792984c57cf97fb79554ae2499a8cb577255140d675b09e9e971c940c797f7c9" Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.028140 4718 scope.go:117] "RemoveContainer" containerID="4bd3f777b687a07dc4b7d698c10c408f844582051d4a9818d0f5cd09e7e6a3f5" Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.059834 4718 scope.go:117] "RemoveContainer" containerID="2fef545cfa3d673bfcbf9a716e78e6633575248134cfd4d0a9eb86a7ef0eacd4" Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.109089 4718 scope.go:117] "RemoveContainer" containerID="ebca100cee04ae7b5b5f80bca0cf7e7ade8c2a8de2c13f40eed9cf4f01e2967b" Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.166120 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" path="/var/lib/kubelet/pods/1b08d018-0695-4abc-8779-8d448c1ac2c2/volumes" Jan 23 16:42:57 crc 
kubenswrapper[4718]: I0123 16:42:57.703317 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" event={"ID":"69dc82c8-1e85-459e-9580-cbc33c567be5","Type":"ContainerStarted","Data":"34f7be5e98bb51fa26ce98cdfb732dff25004467fcc9a0206b8d2b6dff331c2f"} Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.704003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:42:57 crc kubenswrapper[4718]: I0123 16:42:57.728108 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" podStartSLOduration=3.728073041 podStartE2EDuration="3.728073041s" podCreationTimestamp="2026-01-23 16:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:42:57.722874501 +0000 UTC m=+1578.870116492" watchObservedRunningTime="2026-01-23 16:42:57.728073041 +0000 UTC m=+1578.875315032" Jan 23 16:42:59 crc kubenswrapper[4718]: I0123 16:42:59.731064 4718 generic.go:334] "Generic (PLEG): container finished" podID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" containerID="e996032703d2a23cedc05d499219975ca23fd258a96ea407eb5ca222051fccb1" exitCode=0 Jan 23 16:42:59 crc kubenswrapper[4718]: I0123 16:42:59.731148 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5phqp" event={"ID":"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3","Type":"ContainerDied","Data":"e996032703d2a23cedc05d499219975ca23fd258a96ea407eb5ca222051fccb1"} Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.238136 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5phqp" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.396076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle\") pod \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.396141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data\") pod \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.396178 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhgxr\" (UniqueName: \"kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr\") pod \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\" (UID: \"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3\") " Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.404235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr" (OuterVolumeSpecName: "kube-api-access-bhgxr") pod "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" (UID: "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3"). InnerVolumeSpecName "kube-api-access-bhgxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.459000 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" (UID: "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.501148 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.501189 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhgxr\" (UniqueName: \"kubernetes.io/projected/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-kube-api-access-bhgxr\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.510916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data" (OuterVolumeSpecName: "config-data") pod "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" (UID: "0ab31210-204e-4f0c-9aa7-ef99dc7db5c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.603086 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.776534 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5phqp" event={"ID":"0ab31210-204e-4f0c-9aa7-ef99dc7db5c3","Type":"ContainerDied","Data":"ee05f19c87aa80609dce61e5fed796b3943c6dd346ec5c982761eb566e979eed"} Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.776744 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee05f19c87aa80609dce61e5fed796b3943c6dd346ec5c982761eb566e979eed" Jan 23 16:43:01 crc kubenswrapper[4718]: I0123 16:43:01.776674 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5phqp" Jan 23 16:43:02 crc kubenswrapper[4718]: I0123 16:43:02.426260 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 16:43:02 crc kubenswrapper[4718]: I0123 16:43:02.810727 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"cfa855092aaa459299db7fa1bbf5593e8a38344687f51f39b203ab4373d7878d"} Jan 23 16:43:02 crc kubenswrapper[4718]: I0123 16:43:02.843095 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.4638022720000001 podStartE2EDuration="35.843074118s" podCreationTimestamp="2026-01-23 16:42:27 +0000 UTC" firstStartedPulling="2026-01-23 16:42:27.950078938 +0000 UTC m=+1549.097320919" lastFinishedPulling="2026-01-23 16:43:02.329350774 +0000 UTC m=+1583.476592765" observedRunningTime="2026-01-23 16:43:02.832099843 +0000 UTC m=+1583.979341844" watchObservedRunningTime="2026-01-23 16:43:02.843074118 +0000 UTC m=+1583.990316109" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.316240 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6b6465d99d-xv658"] Jan 23 16:43:03 crc kubenswrapper[4718]: E0123 16:43:03.316751 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="init" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.316764 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="init" Jan 23 16:43:03 crc kubenswrapper[4718]: E0123 16:43:03.316775 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="dnsmasq-dns" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.316783 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="dnsmasq-dns" Jan 23 16:43:03 crc kubenswrapper[4718]: E0123 16:43:03.316816 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" containerName="heat-db-sync" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.316822 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" containerName="heat-db-sync" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.317070 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" containerName="heat-db-sync" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.317083 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b08d018-0695-4abc-8779-8d448c1ac2c2" containerName="dnsmasq-dns" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.317888 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.347922 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6b6465d99d-xv658"] Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.361666 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.361717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmsj4\" (UniqueName: \"kubernetes.io/projected/6bb4cb4d-9614-4570-a061-73f87bc9a159-kube-api-access-pmsj4\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " 
pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.361770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data-custom\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.361794 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-combined-ca-bundle\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.401348 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-95db6b64d-5qj7l"] Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.410137 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.425756 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bf6f4bd98-77tgt"] Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.428276 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.442277 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-95db6b64d-5qj7l"] Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.458996 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf6f4bd98-77tgt"] Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-combined-ca-bundle\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466247 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-public-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data-custom\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 
16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-combined-ca-bundle\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vck97\" (UniqueName: \"kubernetes.io/projected/6e6107cd-49bf-4f98-a70b-715fcdcc1535-kube-api-access-vck97\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466535 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-combined-ca-bundle\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466569 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-internal-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " 
pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.466596 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmsj4\" (UniqueName: \"kubernetes.io/projected/6bb4cb4d-9614-4570-a061-73f87bc9a159-kube-api-access-pmsj4\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data-custom\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2tj5\" (UniqueName: \"kubernetes.io/projected/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-kube-api-access-z2tj5\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-public-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: 
\"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471243 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-internal-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.471266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data-custom\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.478887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data-custom\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.483095 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-combined-ca-bundle\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.485825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4cb4d-9614-4570-a061-73f87bc9a159-config-data\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " 
pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.494284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmsj4\" (UniqueName: \"kubernetes.io/projected/6bb4cb4d-9614-4570-a061-73f87bc9a159-kube-api-access-pmsj4\") pod \"heat-engine-6b6465d99d-xv658\" (UID: \"6bb4cb4d-9614-4570-a061-73f87bc9a159\") " pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.574200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575187 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-combined-ca-bundle\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-internal-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575298 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data-custom\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 
23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2tj5\" (UniqueName: \"kubernetes.io/projected/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-kube-api-access-z2tj5\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575468 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-public-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-internal-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575574 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-public-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575716 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data-custom\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-combined-ca-bundle\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.575815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vck97\" (UniqueName: \"kubernetes.io/projected/6e6107cd-49bf-4f98-a70b-715fcdcc1535-kube-api-access-vck97\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.578929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.581836 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-public-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.590483 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-public-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.592344 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-internal-tls-certs\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.592848 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-internal-tls-certs\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.593474 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-combined-ca-bundle\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.594239 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.594998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e6107cd-49bf-4f98-a70b-715fcdcc1535-config-data-custom\") 
pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.595357 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-combined-ca-bundle\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.596293 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-config-data-custom\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.599723 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2tj5\" (UniqueName: \"kubernetes.io/projected/7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8-kube-api-access-z2tj5\") pod \"heat-cfnapi-6bf6f4bd98-77tgt\" (UID: \"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8\") " pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.616652 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vck97\" (UniqueName: \"kubernetes.io/projected/6e6107cd-49bf-4f98-a70b-715fcdcc1535-kube-api-access-vck97\") pod \"heat-api-95db6b64d-5qj7l\" (UID: \"6e6107cd-49bf-4f98-a70b-715fcdcc1535\") " pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.681517 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.709036 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:03 crc kubenswrapper[4718]: I0123 16:43:03.755890 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:04 crc kubenswrapper[4718]: W0123 16:43:04.442895 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f75fda9_2f08_4ec3_a7c9_6d7103f4f4e8.slice/crio-bd7fb9189ecf5b0e8abaf426f941cf4742f860a2ada45af894cdbb8713ead1ec WatchSource:0}: Error finding container bd7fb9189ecf5b0e8abaf426f941cf4742f860a2ada45af894cdbb8713ead1ec: Status 404 returned error can't find the container with id bd7fb9189ecf5b0e8abaf426f941cf4742f860a2ada45af894cdbb8713ead1ec Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.444131 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf6f4bd98-77tgt"] Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.461217 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6b6465d99d-xv658"] Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.611674 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-95db6b64d-5qj7l"] Jan 23 16:43:04 crc kubenswrapper[4718]: W0123 16:43:04.613397 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e6107cd_49bf_4f98_a70b_715fcdcc1535.slice/crio-667b09026076b444ec62e5504677da0014f5416f9c354dd2584310b88fdae07c WatchSource:0}: Error finding container 667b09026076b444ec62e5504677da0014f5416f9c354dd2584310b88fdae07c: Status 404 returned error can't find the container with id 667b09026076b444ec62e5504677da0014f5416f9c354dd2584310b88fdae07c Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.844856 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.873659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" event={"ID":"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8","Type":"ContainerStarted","Data":"bd7fb9189ecf5b0e8abaf426f941cf4742f860a2ada45af894cdbb8713ead1ec"} Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.876991 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-95db6b64d-5qj7l" event={"ID":"6e6107cd-49bf-4f98-a70b-715fcdcc1535","Type":"ContainerStarted","Data":"667b09026076b444ec62e5504677da0014f5416f9c354dd2584310b88fdae07c"} Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.901108 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b6465d99d-xv658" event={"ID":"6bb4cb4d-9614-4570-a061-73f87bc9a159","Type":"ContainerStarted","Data":"39fa10f42fe2b85b5e87a7fedd6600237f270e8cc53b195ac3928b2a12adba06"} Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.983010 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"] Jan 23 16:43:04 crc kubenswrapper[4718]: I0123 16:43:04.983503 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="dnsmasq-dns" containerID="cri-o://205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e" gracePeriod=10 Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.804283 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878373 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878593 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878696 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878732 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnsb\" 
(UniqueName: \"kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.878757 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0\") pod \"b337e576-c57f-4f60-b480-2b411c2c22f7\" (UID: \"b337e576-c57f-4f60-b480-2b411c2c22f7\") " Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.902871 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb" (OuterVolumeSpecName: "kube-api-access-5rnsb") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "kube-api-access-5rnsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.971771 4718 generic.go:334] "Generic (PLEG): container finished" podID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerID="205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e" exitCode=0 Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.971877 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" event={"ID":"b337e576-c57f-4f60-b480-2b411c2c22f7","Type":"ContainerDied","Data":"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e"} Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.971913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" event={"ID":"b337e576-c57f-4f60-b480-2b411c2c22f7","Type":"ContainerDied","Data":"e087805037442997a0250516c5766a5f5cd97b8871b293864ab44153eb991fd4"} Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.971930 4718 scope.go:117] 
"RemoveContainer" containerID="205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.972115 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lv7pq" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.980453 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.983412 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.983454 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rnsb\" (UniqueName: \"kubernetes.io/projected/b337e576-c57f-4f60-b480-2b411c2c22f7-kube-api-access-5rnsb\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.984751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.985422 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b6465d99d-xv658" event={"ID":"6bb4cb4d-9614-4570-a061-73f87bc9a159","Type":"ContainerStarted","Data":"1f04ac49ab934c11695e7cc7e94c5e70e8828423e7ac4bac5caf4c137ec7f03c"} Jan 23 16:43:05 crc kubenswrapper[4718]: I0123 16:43:05.985658 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.010848 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6b6465d99d-xv658" podStartSLOduration=3.010827612 podStartE2EDuration="3.010827612s" podCreationTimestamp="2026-01-23 16:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:43:06.007718369 +0000 UTC m=+1587.154960360" watchObservedRunningTime="2026-01-23 16:43:06.010827612 +0000 UTC m=+1587.158069603" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.014673 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config" (OuterVolumeSpecName: "config") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.028734 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.062354 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.070651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b337e576-c57f-4f60-b480-2b411c2c22f7" (UID: "b337e576-c57f-4f60-b480-2b411c2c22f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.086595 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.086652 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.086671 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.086685 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.086700 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b337e576-c57f-4f60-b480-2b411c2c22f7-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.316213 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"] Jan 23 16:43:06 crc kubenswrapper[4718]: I0123 16:43:06.353583 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lv7pq"] Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.157685 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" path="/var/lib/kubelet/pods/b337e576-c57f-4f60-b480-2b411c2c22f7/volumes" Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.404615 4718 scope.go:117] "RemoveContainer" containerID="b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5" Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.468401 4718 scope.go:117] "RemoveContainer" containerID="205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e" Jan 23 16:43:07 crc kubenswrapper[4718]: E0123 16:43:07.468969 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e\": container with ID starting with 205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e not found: ID does not exist" containerID="205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e" Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.469018 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e"} err="failed to get container status \"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e\": rpc 
error: code = NotFound desc = could not find container \"205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e\": container with ID starting with 205188f4811df5d00592a5448a142ea3d242bbc2a9c44ab882800450f9333b0e not found: ID does not exist" Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.469044 4718 scope.go:117] "RemoveContainer" containerID="b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5" Jan 23 16:43:07 crc kubenswrapper[4718]: E0123 16:43:07.469309 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5\": container with ID starting with b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5 not found: ID does not exist" containerID="b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5" Jan 23 16:43:07 crc kubenswrapper[4718]: I0123 16:43:07.469331 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5"} err="failed to get container status \"b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5\": rpc error: code = NotFound desc = could not find container \"b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5\": container with ID starting with b42a511ba18eec82ea1101b9f5fff79cf2e249a06a1b03c94c9fc5c6f5681ff5 not found: ID does not exist" Jan 23 16:43:08 crc kubenswrapper[4718]: I0123 16:43:08.012185 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" event={"ID":"7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8","Type":"ContainerStarted","Data":"c346a91481d71ef3da6efd1c74c73d6617997ed6553ce59fe5a8df870c043da7"} Jan 23 16:43:08 crc kubenswrapper[4718]: I0123 16:43:08.012563 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:08 
crc kubenswrapper[4718]: I0123 16:43:08.015155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-95db6b64d-5qj7l" event={"ID":"6e6107cd-49bf-4f98-a70b-715fcdcc1535","Type":"ContainerStarted","Data":"a9c1aff70d29494717374ae39b549cb213bc5264e6f4928ed4989f4ec59bd8aa"} Jan 23 16:43:08 crc kubenswrapper[4718]: I0123 16:43:08.015287 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:08 crc kubenswrapper[4718]: I0123 16:43:08.041323 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" podStartSLOduration=2.021908226 podStartE2EDuration="5.041304915s" podCreationTimestamp="2026-01-23 16:43:03 +0000 UTC" firstStartedPulling="2026-01-23 16:43:04.448989105 +0000 UTC m=+1585.596231096" lastFinishedPulling="2026-01-23 16:43:07.468385794 +0000 UTC m=+1588.615627785" observedRunningTime="2026-01-23 16:43:08.033547877 +0000 UTC m=+1589.180789868" watchObservedRunningTime="2026-01-23 16:43:08.041304915 +0000 UTC m=+1589.188546906" Jan 23 16:43:08 crc kubenswrapper[4718]: I0123 16:43:08.062470 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-95db6b64d-5qj7l" podStartSLOduration=2.209938963 podStartE2EDuration="5.062451031s" podCreationTimestamp="2026-01-23 16:43:03 +0000 UTC" firstStartedPulling="2026-01-23 16:43:04.620471649 +0000 UTC m=+1585.767713640" lastFinishedPulling="2026-01-23 16:43:07.472983717 +0000 UTC m=+1588.620225708" observedRunningTime="2026-01-23 16:43:08.060663563 +0000 UTC m=+1589.207905554" watchObservedRunningTime="2026-01-23 16:43:08.062451031 +0000 UTC m=+1589.209693022" Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.182962 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-95db6b64d-5qj7l" Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.190445 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-6bf6f4bd98-77tgt" Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.302324 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"] Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.303108 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7995cbf6b7-gc2m4" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" containerName="heat-api" containerID="cri-o://7ecae85aa5d677740d04bfbb74c835105846bdce9c410fba05210ebe5f307dc7" gracePeriod=60 Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.320510 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"] Jan 23 16:43:15 crc kubenswrapper[4718]: I0123 16:43:15.321026 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerName="heat-cfnapi" containerID="cri-o://d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4" gracePeriod=60 Jan 23 16:43:19 crc kubenswrapper[4718]: I0123 16:43:19.380022 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": read tcp 10.217.0.2:57414->10.217.0.232:8000: read: connection reset by peer" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.025584 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.067361 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7995cbf6b7-gc2m4" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": read tcp 10.217.0.2:38996->10.217.0.231:8004: read: connection reset by peer" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191469 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191664 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191690 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191763 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkfd\" (UniqueName: \"kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.191914 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs\") pod \"a7cf1cff-e480-43e4-b8df-c0b44812baab\" (UID: \"a7cf1cff-e480-43e4-b8df-c0b44812baab\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.203497 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd" (OuterVolumeSpecName: "kube-api-access-vmkfd") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "kube-api-access-vmkfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.211923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.237441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.299326 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.304220 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.304249 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.304260 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkfd\" (UniqueName: \"kubernetes.io/projected/a7cf1cff-e480-43e4-b8df-c0b44812baab-kube-api-access-vmkfd\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.304277 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.411933 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.412225 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data" (OuterVolumeSpecName: "config-data") pod "a7cf1cff-e480-43e4-b8df-c0b44812baab" (UID: "a7cf1cff-e480-43e4-b8df-c0b44812baab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.414271 4718 generic.go:334] "Generic (PLEG): container finished" podID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerID="d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4" exitCode=0 Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.414340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" event={"ID":"a7cf1cff-e480-43e4-b8df-c0b44812baab","Type":"ContainerDied","Data":"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4"} Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.414371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" event={"ID":"a7cf1cff-e480-43e4-b8df-c0b44812baab","Type":"ContainerDied","Data":"cea1fb2a450d3c3f9d85fce1f51d4694d35aca22bf03c591390576341d7adf5a"} Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.414387 4718 scope.go:117] "RemoveContainer" containerID="d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.414555 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65d6464c6f-swdhs" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.418127 4718 generic.go:334] "Generic (PLEG): container finished" podID="f6e56f12-62cd-469a-a48a-0319680955f5" containerID="7ecae85aa5d677740d04bfbb74c835105846bdce9c410fba05210ebe5f307dc7" exitCode=0 Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.418174 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7995cbf6b7-gc2m4" event={"ID":"f6e56f12-62cd-469a-a48a-0319680955f5","Type":"ContainerDied","Data":"7ecae85aa5d677740d04bfbb74c835105846bdce9c410fba05210ebe5f307dc7"} Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.469275 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"] Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.490740 4718 scope.go:117] "RemoveContainer" containerID="d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4" Jan 23 16:43:20 crc kubenswrapper[4718]: E0123 16:43:20.505375 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4\": container with ID starting with d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4 not found: ID does not exist" containerID="d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.505527 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4"} err="failed to get container status \"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4\": rpc error: code = NotFound desc = could not find container \"d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4\": container with ID starting with 
d361462443f0b23545a73be88c6c0b738fd70d86cbba8ef7416e9180dc91d5b4 not found: ID does not exist" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.510869 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-65d6464c6f-swdhs"] Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.517454 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.517499 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cf1cff-e480-43e4-b8df-c0b44812baab-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.745575 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.929552 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs\") pod \"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.929919 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brb42\" (UniqueName: \"kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42\") pod \"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.930019 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs\") pod 
\"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.930234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom\") pod \"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.930328 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle\") pod \"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.930393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data\") pod \"f6e56f12-62cd-469a-a48a-0319680955f5\" (UID: \"f6e56f12-62cd-469a-a48a-0319680955f5\") " Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.934904 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.938311 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42" (OuterVolumeSpecName: "kube-api-access-brb42") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). 
InnerVolumeSpecName "kube-api-access-brb42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.977114 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:20 crc kubenswrapper[4718]: I0123 16:43:20.999249 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data" (OuterVolumeSpecName: "config-data") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.002204 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.007357 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f6e56f12-62cd-469a-a48a-0319680955f5" (UID: "f6e56f12-62cd-469a-a48a-0319680955f5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033707 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033751 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033763 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033771 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033781 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brb42\" (UniqueName: \"kubernetes.io/projected/f6e56f12-62cd-469a-a48a-0319680955f5-kube-api-access-brb42\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.033792 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e56f12-62cd-469a-a48a-0319680955f5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.154967 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" path="/var/lib/kubelet/pods/a7cf1cff-e480-43e4-b8df-c0b44812baab/volumes" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.431260 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7995cbf6b7-gc2m4" event={"ID":"f6e56f12-62cd-469a-a48a-0319680955f5","Type":"ContainerDied","Data":"d4bc62a26a9900c12b07758bebd4999e28f01e67aaea54f3f1164fa296d995f7"} Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.431326 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7995cbf6b7-gc2m4" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.431333 4718 scope.go:117] "RemoveContainer" containerID="7ecae85aa5d677740d04bfbb74c835105846bdce9c410fba05210ebe5f307dc7" Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.435825 4718 generic.go:334] "Generic (PLEG): container finished" podID="bca04db1-8e77-405e-b8ef-656cf882136c" containerID="78e0fc1e08ebc2e2f2e4a8669e1e8784935bc4a5f9312fd58bd86c64eb462965" exitCode=0 Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.435886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bca04db1-8e77-405e-b8ef-656cf882136c","Type":"ContainerDied","Data":"78e0fc1e08ebc2e2f2e4a8669e1e8784935bc4a5f9312fd58bd86c64eb462965"} Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.438742 4718 generic.go:334] "Generic (PLEG): container finished" podID="d346ed1b-38d4-4c87-82f6-78ec3880c670" containerID="776213fecd4cab49561a950992b6bc2c42d214dcd04577d0dfea99854c2238a2" exitCode=0 Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.438794 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d346ed1b-38d4-4c87-82f6-78ec3880c670","Type":"ContainerDied","Data":"776213fecd4cab49561a950992b6bc2c42d214dcd04577d0dfea99854c2238a2"} Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.537119 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"] Jan 23 16:43:21 crc kubenswrapper[4718]: I0123 16:43:21.559179 4718 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/heat-api-7995cbf6b7-gc2m4"] Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.452973 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bca04db1-8e77-405e-b8ef-656cf882136c","Type":"ContainerStarted","Data":"dab74dfd6f5604554bec29f57a6c1f06c898fd94f9c93803548e429087e281ad"} Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.454718 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.456994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d346ed1b-38d4-4c87-82f6-78ec3880c670","Type":"ContainerStarted","Data":"e33fb20993b909636bbd52514936d0525a82ea4674014cbef16910a16fe73c4a"} Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.458410 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.484110 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.48408919 podStartE2EDuration="40.48408919s" podCreationTimestamp="2026-01-23 16:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:43:22.480478694 +0000 UTC m=+1603.627720695" watchObservedRunningTime="2026-01-23 16:43:22.48408919 +0000 UTC m=+1603.631331181" Jan 23 16:43:22 crc kubenswrapper[4718]: I0123 16:43:22.510344 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=40.510321703 podStartE2EDuration="40.510321703s" podCreationTimestamp="2026-01-23 16:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 16:43:22.510002434 +0000 UTC m=+1603.657244415" watchObservedRunningTime="2026-01-23 16:43:22.510321703 +0000 UTC m=+1603.657563694" Jan 23 16:43:23 crc kubenswrapper[4718]: I0123 16:43:23.154840 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" path="/var/lib/kubelet/pods/f6e56f12-62cd-469a-a48a-0319680955f5/volumes" Jan 23 16:43:23 crc kubenswrapper[4718]: I0123 16:43:23.727081 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6b6465d99d-xv658" Jan 23 16:43:23 crc kubenswrapper[4718]: I0123 16:43:23.784088 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"] Jan 23 16:43:23 crc kubenswrapper[4718]: I0123 16:43:23.786139 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerName="heat-engine" containerID="cri-o://7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce" gracePeriod=60 Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.803372 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k"] Jan 23 16:43:24 crc kubenswrapper[4718]: E0123 16:43:24.804294 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="dnsmasq-dns" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804309 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="dnsmasq-dns" Jan 23 16:43:24 crc kubenswrapper[4718]: E0123 16:43:24.804318 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerName="heat-cfnapi" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804324 4718 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerName="heat-cfnapi" Jan 23 16:43:24 crc kubenswrapper[4718]: E0123 16:43:24.804339 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="init" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804345 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="init" Jan 23 16:43:24 crc kubenswrapper[4718]: E0123 16:43:24.804372 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" containerName="heat-api" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804386 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" containerName="heat-api" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804592 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e56f12-62cd-469a-a48a-0319680955f5" containerName="heat-api" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804618 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b337e576-c57f-4f60-b480-2b411c2c22f7" containerName="dnsmasq-dns" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.804643 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cf1cff-e480-43e4-b8df-c0b44812baab" containerName="heat-cfnapi" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.805433 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.808310 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.808590 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.809664 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.809996 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.815374 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k"] Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.847696 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.848043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.848179 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.848258 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgfj6\" (UniqueName: \"kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.950735 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.950881 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.950919 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.950956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgfj6\" (UniqueName: \"kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.956934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.961496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.970131 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:24 crc kubenswrapper[4718]: I0123 16:43:24.997558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgfj6\" (UniqueName: \"kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n777k\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:25 crc kubenswrapper[4718]: I0123 16:43:25.124304 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:43:25 crc kubenswrapper[4718]: I0123 16:43:25.912825 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k"] Jan 23 16:43:26 crc kubenswrapper[4718]: I0123 16:43:26.506513 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" event={"ID":"7d9a564a-3bb5-421a-a861-721b16ae1adc","Type":"ContainerStarted","Data":"f4538f9ba535dec6121a816c12870008030d378a0fefbcf16e6371b0f776d57c"} Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.622420 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-r8tbd"] Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.634579 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-r8tbd"] Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.705049 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-5j2qd"] Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.707260 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.713482 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.718256 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5j2qd"] Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.825518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6txn\" (UniqueName: \"kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.825838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.826098 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.826241 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.928206 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6txn\" (UniqueName: \"kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.928566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.928775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.928929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.933971 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.935708 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data\") pod 
\"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.953372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:27 crc kubenswrapper[4718]: I0123 16:43:27.957173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6txn\" (UniqueName: \"kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn\") pod \"aodh-db-sync-5j2qd\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:28 crc kubenswrapper[4718]: I0123 16:43:28.064290 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:28 crc kubenswrapper[4718]: I0123 16:43:28.674026 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5j2qd"] Jan 23 16:43:28 crc kubenswrapper[4718]: W0123 16:43:28.684056 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8fc39ee_d9ec_43b5_89a6_4a8e31a4f6c4.slice/crio-072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db WatchSource:0}: Error finding container 072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db: Status 404 returned error can't find the container with id 072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db Jan 23 16:43:29 crc kubenswrapper[4718]: I0123 16:43:29.159721 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13056ae-8539-4c9e-bb89-c62a84bd3446" path="/var/lib/kubelet/pods/e13056ae-8539-4c9e-bb89-c62a84bd3446/volumes" Jan 23 16:43:29 crc 
kubenswrapper[4718]: I0123 16:43:29.581324 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j2qd" event={"ID":"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4","Type":"ContainerStarted","Data":"072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db"}
Jan 23 16:43:33 crc kubenswrapper[4718]: I0123 16:43:33.043809 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 16:43:33 crc kubenswrapper[4718]: I0123 16:43:33.085894 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Jan 23 16:43:33 crc kubenswrapper[4718]: I0123 16:43:33.168901 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 23 16:43:33 crc kubenswrapper[4718]: E0123 16:43:33.400964 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 23 16:43:33 crc kubenswrapper[4718]: E0123 16:43:33.403179 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 23 16:43:33 crc kubenswrapper[4718]: E0123 16:43:33.404445 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 23 16:43:33 crc kubenswrapper[4718]: E0123 16:43:33.404479 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerName="heat-engine"
Jan 23 16:43:37 crc kubenswrapper[4718]: I0123 16:43:37.813549 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="rabbitmq" containerID="cri-o://2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82" gracePeriod=604796
Jan 23 16:43:39 crc kubenswrapper[4718]: I0123 16:43:39.721526 4718 generic.go:334] "Generic (PLEG): container finished" podID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerID="7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce" exitCode=0
Jan 23 16:43:39 crc kubenswrapper[4718]: I0123 16:43:39.721808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" event={"ID":"0928142a-4c90-4e86-9e29-8e7abb282cf0","Type":"ContainerDied","Data":"7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce"}
Jan 23 16:43:40 crc kubenswrapper[4718]: I0123 16:43:40.898376 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Jan 23 16:43:41 crc kubenswrapper[4718]: E0123 16:43:41.486242 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 23 16:43:41 crc kubenswrapper[4718]: E0123 16:43:41.486863 4718 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 23 16:43:41 crc kubenswrapper[4718]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 23 16:43:41 crc kubenswrapper[4718]: - hosts: all
Jan 23 16:43:41 crc kubenswrapper[4718]: strategy: linear
Jan 23 16:43:41 crc kubenswrapper[4718]: tasks:
Jan 23 16:43:41 crc kubenswrapper[4718]: - name: Enable podified-repos
Jan 23 16:43:41 crc kubenswrapper[4718]: become: true
Jan 23 16:43:41 crc kubenswrapper[4718]: ansible.builtin.shell: |
Jan 23 16:43:41 crc kubenswrapper[4718]: set -euxo pipefail
Jan 23 16:43:41 crc kubenswrapper[4718]: pushd /var/tmp
Jan 23 16:43:41 crc kubenswrapper[4718]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 23 16:43:41 crc kubenswrapper[4718]: pushd repo-setup-main
Jan 23 16:43:41 crc kubenswrapper[4718]: python3 -m venv ./venv
Jan 23 16:43:41 crc kubenswrapper[4718]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 23 16:43:41 crc kubenswrapper[4718]: ./venv/bin/repo-setup current-podified -b antelope
Jan 23 16:43:41 crc kubenswrapper[4718]: popd
Jan 23 16:43:41 crc kubenswrapper[4718]: rm -rf repo-setup-main
Jan 23 16:43:41 crc kubenswrapper[4718]:
Jan 23 16:43:41 crc kubenswrapper[4718]:
Jan 23 16:43:41 crc kubenswrapper[4718]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 23 16:43:41 crc kubenswrapper[4718]: edpm_override_hosts: openstack-edpm-ipam
Jan 23 16:43:41 crc kubenswrapper[4718]: edpm_service_type: repo-setup
Jan 23 16:43:41 crc kubenswrapper[4718]:
Jan 23 16:43:41 crc kubenswrapper[4718]:
Jan 23 16:43:41 crc kubenswrapper[4718]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgfj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-n777k_openstack(7d9a564a-3bb5-421a-a861-721b16ae1adc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 23 16:43:41 crc kubenswrapper[4718]: > logger="UnhandledError"
Jan 23 16:43:41 crc kubenswrapper[4718]: E0123 16:43:41.489516 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" podUID="7d9a564a-3bb5-421a-a861-721b16ae1adc"
Jan 23 16:43:41 crc kubenswrapper[4718]: I0123 16:43:41.510545 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 23 16:43:41 crc kubenswrapper[4718]: E0123 16:43:41.749938 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" podUID="7d9a564a-3bb5-421a-a861-721b16ae1adc"
Jan 23 16:43:41 crc kubenswrapper[4718]: I0123 16:43:41.987277 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b6b6b4c97-tcz2k"
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.091180 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data\") pod \"0928142a-4c90-4e86-9e29-8e7abb282cf0\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") "
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.091327 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2g4f\" (UniqueName: \"kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f\") pod \"0928142a-4c90-4e86-9e29-8e7abb282cf0\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") "
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.091614 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom\") pod \"0928142a-4c90-4e86-9e29-8e7abb282cf0\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") "
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.091652 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle\") pod \"0928142a-4c90-4e86-9e29-8e7abb282cf0\" (UID: \"0928142a-4c90-4e86-9e29-8e7abb282cf0\") "
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.106809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0928142a-4c90-4e86-9e29-8e7abb282cf0" (UID: "0928142a-4c90-4e86-9e29-8e7abb282cf0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.107250 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f" (OuterVolumeSpecName: "kube-api-access-n2g4f") pod "0928142a-4c90-4e86-9e29-8e7abb282cf0" (UID: "0928142a-4c90-4e86-9e29-8e7abb282cf0"). InnerVolumeSpecName "kube-api-access-n2g4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.154832 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0928142a-4c90-4e86-9e29-8e7abb282cf0" (UID: "0928142a-4c90-4e86-9e29-8e7abb282cf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.162476 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data" (OuterVolumeSpecName: "config-data") pod "0928142a-4c90-4e86-9e29-8e7abb282cf0" (UID: "0928142a-4c90-4e86-9e29-8e7abb282cf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.199776 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.200098 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.200417 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0928142a-4c90-4e86-9e29-8e7abb282cf0-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.200562 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2g4f\" (UniqueName: \"kubernetes.io/projected/0928142a-4c90-4e86-9e29-8e7abb282cf0-kube-api-access-n2g4f\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.759683 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b6b6b4c97-tcz2k"
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.759702 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b6b6b4c97-tcz2k" event={"ID":"0928142a-4c90-4e86-9e29-8e7abb282cf0","Type":"ContainerDied","Data":"740cc86e9d97175ad2666e4f2a6a09ac0bbe3b0bd533954fa980f9ab9d240731"}
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.759872 4718 scope.go:117] "RemoveContainer" containerID="7f4f5852b93dc917c318f9951829c4ca61d328d1da46b2fea64c7fbf55d262ce"
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.763033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j2qd" event={"ID":"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4","Type":"ContainerStarted","Data":"fa9f9f05c388113db8d8f34bfc6d659790fd3bed01a9cc36ba95a25f4b0f8ce2"}
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.809022 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-5j2qd" podStartSLOduration=2.989580113 podStartE2EDuration="15.808987916s" podCreationTimestamp="2026-01-23 16:43:27 +0000 UTC" firstStartedPulling="2026-01-23 16:43:28.687103787 +0000 UTC m=+1609.834345778" lastFinishedPulling="2026-01-23 16:43:41.50651158 +0000 UTC m=+1622.653753581" observedRunningTime="2026-01-23 16:43:42.793361958 +0000 UTC m=+1623.940603959" watchObservedRunningTime="2026-01-23 16:43:42.808987916 +0000 UTC m=+1623.956229907"
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.838374 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"]
Jan 23 16:43:42 crc kubenswrapper[4718]: I0123 16:43:42.854418 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7b6b6b4c97-tcz2k"]
Jan 23 16:43:43 crc kubenswrapper[4718]: I0123 16:43:43.156857 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" path="/var/lib/kubelet/pods/0928142a-4c90-4e86-9e29-8e7abb282cf0/volumes"
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.649793 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.804912 4718 generic.go:334] "Generic (PLEG): container finished" podID="39c8c979-1e2b-4757-9b14-3526451859e3" containerID="2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82" exitCode=0
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.804975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerDied","Data":"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"}
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.805051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"39c8c979-1e2b-4757-9b14-3526451859e3","Type":"ContainerDied","Data":"8e412a0fa4352cf24a6c3919a234a36006352c8e870b7a5044ccc40c63b392bb"}
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.805076 4718 scope.go:117] "RemoveContainer" containerID="2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.805077 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.810120 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.811145 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.811243 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sjd4\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.811324 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.811376 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.811405 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.812074 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813148 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813216 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813255 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.813336 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie\") pod \"39c8c979-1e2b-4757-9b14-3526451859e3\" (UID: \"39c8c979-1e2b-4757-9b14-3526451859e3\") "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.815443 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.815468 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.818916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.819495 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4" (OuterVolumeSpecName: "kube-api-access-2sjd4") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "kube-api-access-2sjd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.819910 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info" (OuterVolumeSpecName: "pod-info") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.822364 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.822915 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.840295 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6" (OuterVolumeSpecName: "persistence") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.852420 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data" (OuterVolumeSpecName: "config-data") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.904667 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf" (OuterVolumeSpecName: "server-conf") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918481 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c8c979-1e2b-4757-9b14-3526451859e3-pod-info\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918570 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") on node \"crc\" "
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918589 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sjd4\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-kube-api-access-2sjd4\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918604 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918617 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918655 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c8c979-1e2b-4757-9b14-3526451859e3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918669 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c8c979-1e2b-4757-9b14-3526451859e3-server-conf\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.918681 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.975665 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.975861 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6") on node "crc"
Jan 23 16:43:44 crc kubenswrapper[4718]: I0123 16:43:44.993227 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "39c8c979-1e2b-4757-9b14-3526451859e3" (UID: "39c8c979-1e2b-4757-9b14-3526451859e3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.005105 4718 scope.go:117] "RemoveContainer" containerID="4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.022022 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.022082 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c8c979-1e2b-4757-9b14-3526451859e3-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.033665 4718 scope.go:117] "RemoveContainer" containerID="2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"
Jan 23 16:43:45 crc kubenswrapper[4718]: E0123 16:43:45.034158 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82\": container with ID starting with 2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82 not found: ID does not exist" containerID="2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.034195 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82"} err="failed to get container status \"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82\": rpc error: code = NotFound desc = could not find container \"2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82\": container with ID starting with 2d9a0fa0a267e3b9445c55b61de147cb4077e12a319d79f63c8fb73adcd42b82 not found: ID does not exist"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.034220 4718 scope.go:117] "RemoveContainer" containerID="4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"
Jan 23 16:43:45 crc kubenswrapper[4718]: E0123 16:43:45.034645 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db\": container with ID starting with 4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db not found: ID does not exist" containerID="4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.034670 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db"} err="failed to get container status \"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db\": rpc error: code = NotFound desc = could not find container \"4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db\": container with ID starting with 4fb04f6ffcae1daefee38a46630a8461b943b051c50de755500c97335de1b9db not found: ID does not exist"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.194651 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.245449 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.258448 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 23 16:43:45 crc kubenswrapper[4718]: E0123 16:43:45.259431 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="setup-container"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.259463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="setup-container"
Jan 23 16:43:45 crc kubenswrapper[4718]: E0123 16:43:45.259499 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="rabbitmq"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.259507 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="rabbitmq"
Jan 23 16:43:45 crc kubenswrapper[4718]: E0123 16:43:45.259524 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerName="heat-engine"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.259531 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerName="heat-engine"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.259828 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0928142a-4c90-4e86-9e29-8e7abb282cf0" containerName="heat-engine"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.259861 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" containerName="rabbitmq"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.265453 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.277889 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.453741 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-config-data\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454290 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454513 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e86bb18-ca80-49f5-9de6-46737ff29374-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454764 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e86bb18-ca80-49f5-9de6-46737ff29374-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.454941 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.455002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgws\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-kube-api-access-rkgws\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1"
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.455127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName:
\"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.455248 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.556825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-config-data\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.556885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.556948 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e86bb18-ca80-49f5-9de6-46737ff29374-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557006 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-plugins\") pod 
\"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e86bb18-ca80-49f5-9de6-46737ff29374-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557061 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557082 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgws\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-kube-api-access-rkgws\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: 
I0123 16:43:45.557198 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.557928 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.558054 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.558256 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.558441 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-config-data\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.558670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e86bb18-ca80-49f5-9de6-46737ff29374-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.563305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.563680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e86bb18-ca80-49f5-9de6-46737ff29374-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.564784 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.564824 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1f1a19402a652047b834d193030e18c391170c2a2a97a761d532da758d1e072/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.564821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e86bb18-ca80-49f5-9de6-46737ff29374-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.565684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.579109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgws\" (UniqueName: \"kubernetes.io/projected/6e86bb18-ca80-49f5-9de6-46737ff29374-kube-api-access-rkgws\") pod \"rabbitmq-server-1\" (UID: \"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.643392 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f1010f5-7f59-4be3-9645-d781aa361aa6\") pod \"rabbitmq-server-1\" (UID: 
\"6e86bb18-ca80-49f5-9de6-46737ff29374\") " pod="openstack/rabbitmq-server-1" Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.818373 4718 generic.go:334] "Generic (PLEG): container finished" podID="e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" containerID="fa9f9f05c388113db8d8f34bfc6d659790fd3bed01a9cc36ba95a25f4b0f8ce2" exitCode=0 Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.818424 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j2qd" event={"ID":"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4","Type":"ContainerDied","Data":"fa9f9f05c388113db8d8f34bfc6d659790fd3bed01a9cc36ba95a25f4b0f8ce2"} Jan 23 16:43:45 crc kubenswrapper[4718]: I0123 16:43:45.899924 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 23 16:43:46 crc kubenswrapper[4718]: I0123 16:43:46.447947 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 23 16:43:46 crc kubenswrapper[4718]: W0123 16:43:46.450903 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e86bb18_ca80_49f5_9de6_46737ff29374.slice/crio-5ea70fa8469f3be70057ea7936dd8f048a62ac617fab04983d28df9fc4495c0f WatchSource:0}: Error finding container 5ea70fa8469f3be70057ea7936dd8f048a62ac617fab04983d28df9fc4495c0f: Status 404 returned error can't find the container with id 5ea70fa8469f3be70057ea7936dd8f048a62ac617fab04983d28df9fc4495c0f Jan 23 16:43:46 crc kubenswrapper[4718]: I0123 16:43:46.845262 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6e86bb18-ca80-49f5-9de6-46737ff29374","Type":"ContainerStarted","Data":"5ea70fa8469f3be70057ea7936dd8f048a62ac617fab04983d28df9fc4495c0f"} Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.161899 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c8c979-1e2b-4757-9b14-3526451859e3" 
path="/var/lib/kubelet/pods/39c8c979-1e2b-4757-9b14-3526451859e3/volumes" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.320880 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.445949 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle\") pod \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.446017 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts\") pod \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.446337 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6txn\" (UniqueName: \"kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn\") pod \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.446367 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data\") pod \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\" (UID: \"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4\") " Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.453657 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts" (OuterVolumeSpecName: "scripts") pod "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" (UID: "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.471702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn" (OuterVolumeSpecName: "kube-api-access-c6txn") pod "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" (UID: "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4"). InnerVolumeSpecName "kube-api-access-c6txn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.483063 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data" (OuterVolumeSpecName: "config-data") pod "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" (UID: "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.498382 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" (UID: "e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.561209 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6txn\" (UniqueName: \"kubernetes.io/projected/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-kube-api-access-c6txn\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.561236 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.561247 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.561255 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.872237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5j2qd" event={"ID":"e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4","Type":"ContainerDied","Data":"072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db"} Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.872289 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5j2qd" Jan 23 16:43:47 crc kubenswrapper[4718]: I0123 16:43:47.872301 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072cf62e49ab232e5f7ca44d30a186b195a931576aa00484ceee5a66bbb750db" Jan 23 16:43:48 crc kubenswrapper[4718]: I0123 16:43:48.887999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6e86bb18-ca80-49f5-9de6-46737ff29374","Type":"ContainerStarted","Data":"ac34efea6caa89dc9fab5c176b324c6903d3e77806387f4430666d0f7453af90"} Jan 23 16:43:52 crc kubenswrapper[4718]: I0123 16:43:52.872574 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 16:43:52 crc kubenswrapper[4718]: I0123 16:43:52.874163 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-api" containerID="cri-o://6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d" gracePeriod=30 Jan 23 16:43:52 crc kubenswrapper[4718]: I0123 16:43:52.875180 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-listener" containerID="cri-o://9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20" gracePeriod=30 Jan 23 16:43:52 crc kubenswrapper[4718]: I0123 16:43:52.875310 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-notifier" containerID="cri-o://047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac" gracePeriod=30 Jan 23 16:43:52 crc kubenswrapper[4718]: I0123 16:43:52.875403 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-evaluator" 
containerID="cri-o://8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de" gracePeriod=30 Jan 23 16:43:53 crc kubenswrapper[4718]: I0123 16:43:53.956191 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerID="8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de" exitCode=0 Jan 23 16:43:53 crc kubenswrapper[4718]: I0123 16:43:53.956714 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerID="6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d" exitCode=0 Jan 23 16:43:53 crc kubenswrapper[4718]: I0123 16:43:53.956274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerDied","Data":"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de"} Jan 23 16:43:53 crc kubenswrapper[4718]: I0123 16:43:53.956767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerDied","Data":"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d"} Jan 23 16:43:55 crc kubenswrapper[4718]: I0123 16:43:55.981951 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerID="9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20" exitCode=0 Jan 23 16:43:55 crc kubenswrapper[4718]: I0123 16:43:55.982017 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerDied","Data":"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20"} Jan 23 16:43:56 crc kubenswrapper[4718]: I0123 16:43:56.629514 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:43:56 crc kubenswrapper[4718]: I0123 16:43:56.995770 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" event={"ID":"7d9a564a-3bb5-421a-a861-721b16ae1adc","Type":"ContainerStarted","Data":"086199cc95fcc4795607d9e057060d788fce0e0e76972c667d72343d0b03b7a5"} Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.019065 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" podStartSLOduration=2.303268607 podStartE2EDuration="33.019045717s" podCreationTimestamp="2026-01-23 16:43:24 +0000 UTC" firstStartedPulling="2026-01-23 16:43:25.910836093 +0000 UTC m=+1607.058078084" lastFinishedPulling="2026-01-23 16:43:56.626613203 +0000 UTC m=+1637.773855194" observedRunningTime="2026-01-23 16:43:57.012977535 +0000 UTC m=+1638.160219526" watchObservedRunningTime="2026-01-23 16:43:57.019045717 +0000 UTC m=+1638.166287708" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.466697 4718 scope.go:117] "RemoveContainer" containerID="809fe8a28e6ad8ef22ff17b1650f5d8ae1d7cdf418842346d4d245a41ba2ffc4" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.501122 4718 scope.go:117] "RemoveContainer" containerID="ab09c8bb2b1ab814b21185f83d61eb76e4b8bd0e37fb09f239e6729b3f99f6df" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.541480 4718 scope.go:117] "RemoveContainer" containerID="c7d9778bd38f9b1dd30e6e5fc91959033ff5becb7bacc4bbe4a8f9af2ae57aa7" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.572664 4718 scope.go:117] "RemoveContainer" containerID="0f58d8ab7425a08edd31c56f4cde058daa496f5badbf94fa61c6ffa9300a8e84" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.611025 4718 scope.go:117] "RemoveContainer" containerID="1ae7968bf6684799caaebf54ffd31d959eaab938e800a2b656112bff7c44ce7b" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.644511 4718 scope.go:117] "RemoveContainer" containerID="78dc59f3476eb5ad02c7177482d05b22cac323ea057d4ae6ac6d9e6821bbfc07" Jan 23 16:43:57 crc 
kubenswrapper[4718]: I0123 16:43:57.679342 4718 scope.go:117] "RemoveContainer" containerID="2b16a67beadd9baf553b4ac099ee87ab37142c4add0e1e4b54f77d1d0de95131" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.756775 4718 scope.go:117] "RemoveContainer" containerID="018925fd1e0271fed1d0e3af3b08c0304283b716135848e08fdcb4319f80ddd2" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.804804 4718 scope.go:117] "RemoveContainer" containerID="2db9cf20fa6b20c906e87b700696cf5dc6e88650dd35a9572ed443c563377bed" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.845460 4718 scope.go:117] "RemoveContainer" containerID="fe07bc6827bb0eb6fc80607def05b3a8260ef58ca7b9fcd88a659999a2fc20e5" Jan 23 16:43:57 crc kubenswrapper[4718]: I0123 16:43:57.901591 4718 scope.go:117] "RemoveContainer" containerID="369f7da4497093c2d3f133230f8d8a1cb01e50079afd3bf31dc655587357b169" Jan 23 16:43:58 crc kubenswrapper[4718]: I0123 16:43:58.875964 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:43:58 crc kubenswrapper[4718]: I0123 16:43:58.876450 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.581572 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621038 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621093 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621216 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621235 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvht4\" (UniqueName: \"kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621275 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.621327 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts\") pod \"c3b4f02b-601b-4c1d-a9df-a488ce538760\" (UID: \"c3b4f02b-601b-4c1d-a9df-a488ce538760\") " Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.630588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts" (OuterVolumeSpecName: "scripts") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.635067 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4" (OuterVolumeSpecName: "kube-api-access-kvht4") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "kube-api-access-kvht4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.727332 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvht4\" (UniqueName: \"kubernetes.io/projected/c3b4f02b-601b-4c1d-a9df-a488ce538760-kube-api-access-kvht4\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.727386 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.734280 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.767851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.806309 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data" (OuterVolumeSpecName: "config-data") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.815451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3b4f02b-601b-4c1d-a9df-a488ce538760" (UID: "c3b4f02b-601b-4c1d-a9df-a488ce538760"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.829980 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.830021 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.830031 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:00 crc kubenswrapper[4718]: I0123 16:44:00.830042 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b4f02b-601b-4c1d-a9df-a488ce538760-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.076791 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerID="047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac" exitCode=0 Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.076832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerDied","Data":"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac"} Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.076881 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c3b4f02b-601b-4c1d-a9df-a488ce538760","Type":"ContainerDied","Data":"1ab3704f92887e05a58fd0d3007ea0f2d74d0757d977ec0f1a0e37afbf4a7040"} Jan 23 16:44:01 crc 
kubenswrapper[4718]: I0123 16:44:01.076911 4718 scope.go:117] "RemoveContainer" containerID="9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.076912 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.126416 4718 scope.go:117] "RemoveContainer" containerID="047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.190477 4718 scope.go:117] "RemoveContainer" containerID="8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.217626 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.226920 4718 scope.go:117] "RemoveContainer" containerID="6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.235963 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254011 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.254614 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-api" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254646 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-api" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.254679 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" containerName="aodh-db-sync" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254685 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" containerName="aodh-db-sync" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.254695 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-evaluator" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254701 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-evaluator" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.254722 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-notifier" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254728 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-notifier" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.254735 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-listener" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254740 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-listener" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254944 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" containerName="aodh-db-sync" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254961 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-listener" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254973 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-api" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.254985 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-evaluator" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.255000 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" containerName="aodh-notifier" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.257159 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.260913 4718 scope.go:117] "RemoveContainer" containerID="9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261279 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261359 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261497 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261574 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tjnkn" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.261613 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20\": container with ID starting with 9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20 not found: ID does not exist" containerID="9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261668 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261682 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20"} err="failed to get container status \"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20\": rpc error: code = NotFound desc = could not find container \"9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20\": container with ID starting with 9212419d2a5665b0c08a71612aed5325028d609b2b90d96e88b95f6b34081d20 not found: ID does not exist" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.261716 4718 scope.go:117] "RemoveContainer" containerID="047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.265224 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac\": container with ID starting with 047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac not found: ID does not exist" containerID="047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.265294 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac"} err="failed to get container status \"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac\": rpc error: code = NotFound desc = could not find container \"047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac\": container with ID starting with 047145dc8c282168ddbc9efbf6c6a77dcd2ab64ea2bb104330925f84988605ac not found: ID does not exist" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.265340 4718 scope.go:117] "RemoveContainer" containerID="8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 
16:44:01.265779 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de\": container with ID starting with 8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de not found: ID does not exist" containerID="8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.265816 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de"} err="failed to get container status \"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de\": rpc error: code = NotFound desc = could not find container \"8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de\": container with ID starting with 8686c8b9c38bfc7ca7e06f7d04fa9335664c855a48b2702b71bf9fec552850de not found: ID does not exist" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.265839 4718 scope.go:117] "RemoveContainer" containerID="6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d" Jan 23 16:44:01 crc kubenswrapper[4718]: E0123 16:44:01.266409 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d\": container with ID starting with 6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d not found: ID does not exist" containerID="6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.266456 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d"} err="failed to get container status \"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d\": rpc 
error: code = NotFound desc = could not find container \"6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d\": container with ID starting with 6da4c0cf9b5560a7ccabdaa68a0d7705c315361beffd7db3012e18578131118d not found: ID does not exist" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.268130 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.346293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.347750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhdr\" (UniqueName: \"kubernetes.io/projected/cd9c60ee-d51d-4435-889b-870662f44dd6-kube-api-access-wmhdr\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.348105 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.348275 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-config-data\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.348593 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-public-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.349000 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-scripts\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.473526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.473580 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhdr\" (UniqueName: \"kubernetes.io/projected/cd9c60ee-d51d-4435-889b-870662f44dd6-kube-api-access-wmhdr\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.473679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.473710 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-config-data\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " 
pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.473759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-public-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.474322 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-scripts\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.477822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-public-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.477958 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.478475 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-config-data\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.478906 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-scripts\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") 
" pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.484182 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9c60ee-d51d-4435-889b-870662f44dd6-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.490818 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhdr\" (UniqueName: \"kubernetes.io/projected/cd9c60ee-d51d-4435-889b-870662f44dd6-kube-api-access-wmhdr\") pod \"aodh-0\" (UID: \"cd9c60ee-d51d-4435-889b-870662f44dd6\") " pod="openstack/aodh-0" Jan 23 16:44:01 crc kubenswrapper[4718]: I0123 16:44:01.594349 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 16:44:02 crc kubenswrapper[4718]: W0123 16:44:02.145947 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9c60ee_d51d_4435_889b_870662f44dd6.slice/crio-936353fc0c5731b3490debf00218b211724c815db5d22be9187c99320c7364d1 WatchSource:0}: Error finding container 936353fc0c5731b3490debf00218b211724c815db5d22be9187c99320c7364d1: Status 404 returned error can't find the container with id 936353fc0c5731b3490debf00218b211724c815db5d22be9187c99320c7364d1 Jan 23 16:44:02 crc kubenswrapper[4718]: I0123 16:44:02.146791 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 16:44:03 crc kubenswrapper[4718]: I0123 16:44:03.109021 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd9c60ee-d51d-4435-889b-870662f44dd6","Type":"ContainerStarted","Data":"936353fc0c5731b3490debf00218b211724c815db5d22be9187c99320c7364d1"} Jan 23 16:44:03 crc kubenswrapper[4718]: I0123 16:44:03.153265 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c3b4f02b-601b-4c1d-a9df-a488ce538760" path="/var/lib/kubelet/pods/c3b4f02b-601b-4c1d-a9df-a488ce538760/volumes" Jan 23 16:44:04 crc kubenswrapper[4718]: I0123 16:44:04.125193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd9c60ee-d51d-4435-889b-870662f44dd6","Type":"ContainerStarted","Data":"f581024096ae3cc84bd75e2ce7647303fc9ccdb74aedd3cfbfbc0e250e90ff93"} Jan 23 16:44:05 crc kubenswrapper[4718]: I0123 16:44:05.137111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd9c60ee-d51d-4435-889b-870662f44dd6","Type":"ContainerStarted","Data":"bfb9e5c7ad111ae63f81d02f22bcf5af3d111a7efc1aeb0719fcf2e77f6bfcd7"} Jan 23 16:44:06 crc kubenswrapper[4718]: I0123 16:44:06.156060 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd9c60ee-d51d-4435-889b-870662f44dd6","Type":"ContainerStarted","Data":"405d18027fb4dd43bd7590f3ec3cf97e2870011d5edba43817a9e9c093ebcd7e"} Jan 23 16:44:07 crc kubenswrapper[4718]: I0123 16:44:07.172391 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd9c60ee-d51d-4435-889b-870662f44dd6","Type":"ContainerStarted","Data":"2e60c7dfba58f75db72fc3ec964d7ab5d1017edc7d8ab7f2ae0b444e6849050d"} Jan 23 16:44:07 crc kubenswrapper[4718]: I0123 16:44:07.200433 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.672141741 podStartE2EDuration="6.200410967s" podCreationTimestamp="2026-01-23 16:44:01 +0000 UTC" firstStartedPulling="2026-01-23 16:44:02.153771202 +0000 UTC m=+1643.301013193" lastFinishedPulling="2026-01-23 16:44:06.682040428 +0000 UTC m=+1647.829282419" observedRunningTime="2026-01-23 16:44:07.192096514 +0000 UTC m=+1648.339338505" watchObservedRunningTime="2026-01-23 16:44:07.200410967 +0000 UTC m=+1648.347652958" Jan 23 16:44:08 crc kubenswrapper[4718]: I0123 16:44:08.184840 4718 generic.go:334] "Generic (PLEG): 
container finished" podID="7d9a564a-3bb5-421a-a861-721b16ae1adc" containerID="086199cc95fcc4795607d9e057060d788fce0e0e76972c667d72343d0b03b7a5" exitCode=0 Jan 23 16:44:08 crc kubenswrapper[4718]: I0123 16:44:08.184924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" event={"ID":"7d9a564a-3bb5-421a-a861-721b16ae1adc","Type":"ContainerDied","Data":"086199cc95fcc4795607d9e057060d788fce0e0e76972c667d72343d0b03b7a5"} Jan 23 16:44:09 crc kubenswrapper[4718]: I0123 16:44:09.946324 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.026835 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgfj6\" (UniqueName: \"kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6\") pod \"7d9a564a-3bb5-421a-a861-721b16ae1adc\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.026964 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle\") pod \"7d9a564a-3bb5-421a-a861-721b16ae1adc\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.027106 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam\") pod \"7d9a564a-3bb5-421a-a861-721b16ae1adc\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.027252 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory\") pod \"7d9a564a-3bb5-421a-a861-721b16ae1adc\" (UID: \"7d9a564a-3bb5-421a-a861-721b16ae1adc\") " Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.036770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7d9a564a-3bb5-421a-a861-721b16ae1adc" (UID: "7d9a564a-3bb5-421a-a861-721b16ae1adc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.036870 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6" (OuterVolumeSpecName: "kube-api-access-rgfj6") pod "7d9a564a-3bb5-421a-a861-721b16ae1adc" (UID: "7d9a564a-3bb5-421a-a861-721b16ae1adc"). InnerVolumeSpecName "kube-api-access-rgfj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.072006 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d9a564a-3bb5-421a-a861-721b16ae1adc" (UID: "7d9a564a-3bb5-421a-a861-721b16ae1adc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.078308 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory" (OuterVolumeSpecName: "inventory") pod "7d9a564a-3bb5-421a-a861-721b16ae1adc" (UID: "7d9a564a-3bb5-421a-a861-721b16ae1adc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.131172 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgfj6\" (UniqueName: \"kubernetes.io/projected/7d9a564a-3bb5-421a-a861-721b16ae1adc-kube-api-access-rgfj6\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.131215 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.131229 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.131246 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d9a564a-3bb5-421a-a861-721b16ae1adc-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.219298 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" event={"ID":"7d9a564a-3bb5-421a-a861-721b16ae1adc","Type":"ContainerDied","Data":"f4538f9ba535dec6121a816c12870008030d378a0fefbcf16e6371b0f776d57c"} Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.219347 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4538f9ba535dec6121a816c12870008030d378a0fefbcf16e6371b0f776d57c" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.219412 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n777k" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.320215 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92"] Jan 23 16:44:10 crc kubenswrapper[4718]: E0123 16:44:10.321142 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a564a-3bb5-421a-a861-721b16ae1adc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.321166 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a564a-3bb5-421a-a861-721b16ae1adc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.321522 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9a564a-3bb5-421a-a861-721b16ae1adc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.330743 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.333963 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92"] Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.334978 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.335112 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.335197 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.335333 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.437225 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.437679 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wmh\" (UniqueName: \"kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.438075 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.540809 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8wmh\" (UniqueName: \"kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.540961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.541102 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.545493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.545882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.560265 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8wmh\" (UniqueName: \"kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xcm92\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:10 crc kubenswrapper[4718]: I0123 16:44:10.663891 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:11 crc kubenswrapper[4718]: W0123 16:44:11.264036 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95287f3_d510_4991_bde5_94259e7c64d4.slice/crio-48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443 WatchSource:0}: Error finding container 48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443: Status 404 returned error can't find the container with id 48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443 Jan 23 16:44:11 crc kubenswrapper[4718]: I0123 16:44:11.271722 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92"] Jan 23 16:44:12 crc kubenswrapper[4718]: I0123 16:44:12.245426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" event={"ID":"d95287f3-d510-4991-bde5-94259e7c64d4","Type":"ContainerStarted","Data":"51f4508f63224c9dcff2d91c421b6e8242cec7669fa840c2f1273ab983d8ec61"} Jan 23 16:44:12 crc kubenswrapper[4718]: I0123 16:44:12.246537 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" event={"ID":"d95287f3-d510-4991-bde5-94259e7c64d4","Type":"ContainerStarted","Data":"48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443"} Jan 23 16:44:12 crc kubenswrapper[4718]: I0123 16:44:12.271931 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" podStartSLOduration=1.806733796 podStartE2EDuration="2.271912419s" podCreationTimestamp="2026-01-23 16:44:10 +0000 UTC" firstStartedPulling="2026-01-23 16:44:11.267653861 +0000 UTC m=+1652.414895852" lastFinishedPulling="2026-01-23 16:44:11.732832444 +0000 UTC m=+1652.880074475" observedRunningTime="2026-01-23 
16:44:12.260840812 +0000 UTC m=+1653.408082843" watchObservedRunningTime="2026-01-23 16:44:12.271912419 +0000 UTC m=+1653.419154400" Jan 23 16:44:15 crc kubenswrapper[4718]: I0123 16:44:15.310913 4718 generic.go:334] "Generic (PLEG): container finished" podID="d95287f3-d510-4991-bde5-94259e7c64d4" containerID="51f4508f63224c9dcff2d91c421b6e8242cec7669fa840c2f1273ab983d8ec61" exitCode=0 Jan 23 16:44:15 crc kubenswrapper[4718]: I0123 16:44:15.311080 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" event={"ID":"d95287f3-d510-4991-bde5-94259e7c64d4","Type":"ContainerDied","Data":"51f4508f63224c9dcff2d91c421b6e8242cec7669fa840c2f1273ab983d8ec61"} Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.016814 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.137558 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8wmh\" (UniqueName: \"kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh\") pod \"d95287f3-d510-4991-bde5-94259e7c64d4\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.137924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam\") pod \"d95287f3-d510-4991-bde5-94259e7c64d4\" (UID: \"d95287f3-d510-4991-bde5-94259e7c64d4\") " Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.138035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory\") pod \"d95287f3-d510-4991-bde5-94259e7c64d4\" (UID: 
\"d95287f3-d510-4991-bde5-94259e7c64d4\") " Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.145194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh" (OuterVolumeSpecName: "kube-api-access-f8wmh") pod "d95287f3-d510-4991-bde5-94259e7c64d4" (UID: "d95287f3-d510-4991-bde5-94259e7c64d4"). InnerVolumeSpecName "kube-api-access-f8wmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.181642 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d95287f3-d510-4991-bde5-94259e7c64d4" (UID: "d95287f3-d510-4991-bde5-94259e7c64d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.188918 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory" (OuterVolumeSpecName: "inventory") pod "d95287f3-d510-4991-bde5-94259e7c64d4" (UID: "d95287f3-d510-4991-bde5-94259e7c64d4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.242025 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8wmh\" (UniqueName: \"kubernetes.io/projected/d95287f3-d510-4991-bde5-94259e7c64d4-kube-api-access-f8wmh\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.242067 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.242081 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95287f3-d510-4991-bde5-94259e7c64d4-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.335670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" event={"ID":"d95287f3-d510-4991-bde5-94259e7c64d4","Type":"ContainerDied","Data":"48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443"} Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.335711 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xcm92" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.335726 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48381183a6ce0214c1705150b02ada2ad17e790655c282877f7566696d380443" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.431842 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8"] Jan 23 16:44:17 crc kubenswrapper[4718]: E0123 16:44:17.432347 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95287f3-d510-4991-bde5-94259e7c64d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.432368 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95287f3-d510-4991-bde5-94259e7c64d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.432601 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95287f3-d510-4991-bde5-94259e7c64d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.433375 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.437762 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.437803 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.438249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.438305 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.445602 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8"] Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.548947 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.549017 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwd6\" (UniqueName: \"kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.549072 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.549115 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.651232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.651317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwd6\" (UniqueName: \"kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.651379 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.651431 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.656865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.656936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.657919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.670758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwd6\" (UniqueName: \"kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:17 crc kubenswrapper[4718]: I0123 16:44:17.765120 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:44:18 crc kubenswrapper[4718]: I0123 16:44:18.317726 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8"] Jan 23 16:44:18 crc kubenswrapper[4718]: I0123 16:44:18.352240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" event={"ID":"d78674b0-cdd9-4a34-a2d0-b9eece735396","Type":"ContainerStarted","Data":"fd38d8f54c0d40bd67fe1bd8937a60dc00ba16f5b1d4fbb872eb322012c0fa65"} Jan 23 16:44:19 crc kubenswrapper[4718]: I0123 16:44:19.377102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" event={"ID":"d78674b0-cdd9-4a34-a2d0-b9eece735396","Type":"ContainerStarted","Data":"f38a58d69380493c330f20e0fd710e0a1ed05263931ec6a181e8415ebd27a329"} Jan 23 16:44:19 crc kubenswrapper[4718]: I0123 16:44:19.417280 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" podStartSLOduration=1.897000206 podStartE2EDuration="2.417262744s" podCreationTimestamp="2026-01-23 16:44:17 +0000 UTC" firstStartedPulling="2026-01-23 16:44:18.322802691 +0000 UTC m=+1659.470044682" 
lastFinishedPulling="2026-01-23 16:44:18.843065229 +0000 UTC m=+1659.990307220" observedRunningTime="2026-01-23 16:44:19.402384776 +0000 UTC m=+1660.549626767" watchObservedRunningTime="2026-01-23 16:44:19.417262744 +0000 UTC m=+1660.564504735" Jan 23 16:44:21 crc kubenswrapper[4718]: I0123 16:44:21.406685 4718 generic.go:334] "Generic (PLEG): container finished" podID="6e86bb18-ca80-49f5-9de6-46737ff29374" containerID="ac34efea6caa89dc9fab5c176b324c6903d3e77806387f4430666d0f7453af90" exitCode=0 Jan 23 16:44:21 crc kubenswrapper[4718]: I0123 16:44:21.406770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6e86bb18-ca80-49f5-9de6-46737ff29374","Type":"ContainerDied","Data":"ac34efea6caa89dc9fab5c176b324c6903d3e77806387f4430666d0f7453af90"} Jan 23 16:44:22 crc kubenswrapper[4718]: I0123 16:44:22.424042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6e86bb18-ca80-49f5-9de6-46737ff29374","Type":"ContainerStarted","Data":"31ac9085d96419c507417b7c6535471f991036cb12e4b1491e648cc5ae20ef36"} Jan 23 16:44:22 crc kubenswrapper[4718]: I0123 16:44:22.424814 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 23 16:44:22 crc kubenswrapper[4718]: I0123 16:44:22.458835 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.458813317 podStartE2EDuration="37.458813317s" podCreationTimestamp="2026-01-23 16:43:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:44:22.44930422 +0000 UTC m=+1663.596546211" watchObservedRunningTime="2026-01-23 16:44:22.458813317 +0000 UTC m=+1663.606055308" Jan 23 16:44:28 crc kubenswrapper[4718]: I0123 16:44:28.876065 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:44:28 crc kubenswrapper[4718]: I0123 16:44:28.877861 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:44:35 crc kubenswrapper[4718]: I0123 16:44:35.903940 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 23 16:44:35 crc kubenswrapper[4718]: I0123 16:44:35.989718 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:40 crc kubenswrapper[4718]: I0123 16:44:40.833041 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 23 16:44:40 crc kubenswrapper[4718]: I0123 16:44:40.927275 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="rabbitmq" containerID="cri-o://8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b" gracePeriod=604796 Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.706862 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.804030 4718 generic.go:334] "Generic (PLEG): container finished" podID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerID="8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b" exitCode=0 Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.804085 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerDied","Data":"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b"} Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.804115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c94d7446-3c05-408a-a815-fe9adcb5e785","Type":"ContainerDied","Data":"90dc1165db2f99497fd341d8ab217497862f235efe2b028fc5b88b6ae955b3e9"} Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.804135 4718 scope.go:117] "RemoveContainer" containerID="8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.804290 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830368 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830454 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830519 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830545 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830579 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.830885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rxfx\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831353 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"c94d7446-3c05-408a-a815-fe9adcb5e785\" (UID: \"c94d7446-3c05-408a-a815-fe9adcb5e785\") " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.832496 4718 scope.go:117] "RemoveContainer" containerID="75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.832557 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.831369 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.840292 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.861846 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.863462 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info" (OuterVolumeSpecName: "pod-info") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.865791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx" (OuterVolumeSpecName: "kube-api-access-7rxfx") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "kube-api-access-7rxfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.877351 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.877720 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58" (OuterVolumeSpecName: "persistence") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.918303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data" (OuterVolumeSpecName: "config-data") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934761 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rxfx\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-kube-api-access-7rxfx\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934807 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") on node \"crc\" " Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934821 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934830 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934841 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c94d7446-3c05-408a-a815-fe9adcb5e785-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934851 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934858 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.934868 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c94d7446-3c05-408a-a815-fe9adcb5e785-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.976898 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.977060 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58") on node "crc" Jan 23 16:44:47 crc kubenswrapper[4718]: I0123 16:44:47.994784 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf" (OuterVolumeSpecName: "server-conf") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.017433 4718 scope.go:117] "RemoveContainer" containerID="8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b" Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.019930 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b\": container with ID starting with 8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b not found: ID does not exist" containerID="8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.019974 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b"} err="failed to get container status \"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b\": rpc error: code = NotFound desc = could not find container \"8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b\": container with ID starting with 8bafde17a67e2578f6f414e8c50f1a75225e34ba0f99c2d3da5df349b12e460b not found: ID does not exist" Jan 23 16:44:48 crc 
kubenswrapper[4718]: I0123 16:44:48.019996 4718 scope.go:117] "RemoveContainer" containerID="75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49" Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.020536 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49\": container with ID starting with 75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49 not found: ID does not exist" containerID="75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.020562 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49"} err="failed to get container status \"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49\": rpc error: code = NotFound desc = could not find container \"75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49\": container with ID starting with 75dbeb38b5f16886cb08f46c9697dc967e0bf0d8c07b80289178d7452c3e8e49 not found: ID does not exist" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.036864 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.036900 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c94d7446-3c05-408a-a815-fe9adcb5e785-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.049811 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c94d7446-3c05-408a-a815-fe9adcb5e785" (UID: "c94d7446-3c05-408a-a815-fe9adcb5e785"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.158434 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c94d7446-3c05-408a-a815-fe9adcb5e785-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.190568 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.241310 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.287247 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.288761 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="setup-container" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.288782 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="setup-container" Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.289022 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="rabbitmq" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.289029 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" containerName="rabbitmq" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.289623 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" 
containerName="rabbitmq" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.292134 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.357969 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.382569 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94d7446_3c05_408a_a815_fe9adcb5e785.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94d7446_3c05_408a_a815_fe9adcb5e785.slice/crio-90dc1165db2f99497fd341d8ab217497862f235efe2b028fc5b88b6ae955b3e9\": RecentStats: unable to find data in memory cache]" Jan 23 16:44:48 crc kubenswrapper[4718]: E0123 16:44:48.382809 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94d7446_3c05_408a_a815_fe9adcb5e785.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94d7446_3c05_408a_a815_fe9adcb5e785.slice/crio-90dc1165db2f99497fd341d8ab217497862f235efe2b028fc5b88b6ae955b3e9\": RecentStats: unable to find data in memory cache]" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.471171 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.471772 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.471869 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.471971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-config-data\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.472311 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrks\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-kube-api-access-mmrks\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.472588 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.472755 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.472811 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.473084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.473152 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.473339 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrks\" (UniqueName: 
\"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-kube-api-access-mmrks\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575746 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575773 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575836 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575899 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575940 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.575987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.576015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.576040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-config-data\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.576687 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.576769 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.578195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.579529 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.579595 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.579595 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c9f31eea737dc8db7a31716687ef2b138714b4086616d0fd0cdf6b05b7e9535/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.581230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.581855 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-config-data\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.585564 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" 
Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.591215 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.592246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.598394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmrks\" (UniqueName: \"kubernetes.io/projected/6854d6fc-92af-4083-a2e2-2f41dd9d2a73-kube-api-access-mmrks\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.683497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07411bfa-3bca-4f5d-9e72-610dddb7de58\") pod \"rabbitmq-server-0\" (UID: \"6854d6fc-92af-4083-a2e2-2f41dd9d2a73\") " pod="openstack/rabbitmq-server-0" Jan 23 16:44:48 crc kubenswrapper[4718]: I0123 16:44:48.955217 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 16:44:49 crc kubenswrapper[4718]: I0123 16:44:49.163170 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94d7446-3c05-408a-a815-fe9adcb5e785" path="/var/lib/kubelet/pods/c94d7446-3c05-408a-a815-fe9adcb5e785/volumes" Jan 23 16:44:49 crc kubenswrapper[4718]: I0123 16:44:49.665385 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 16:44:49 crc kubenswrapper[4718]: I0123 16:44:49.828335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6854d6fc-92af-4083-a2e2-2f41dd9d2a73","Type":"ContainerStarted","Data":"bf84e4e69925e52466f7e087eb3f53ae7946d4839a62ebfcf3ae8df7373b918b"} Jan 23 16:44:51 crc kubenswrapper[4718]: I0123 16:44:51.855288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6854d6fc-92af-4083-a2e2-2f41dd9d2a73","Type":"ContainerStarted","Data":"a89aa1d7baaebb266f135e1000abc23ba31f353a24d7b8011a9c592611f5d525"} Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.244889 4718 scope.go:117] "RemoveContainer" containerID="91e20785aa4478e658e6b3ce9c61dd60a5405b430a5ab681c0dacea39dbc96c2" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.285958 4718 scope.go:117] "RemoveContainer" containerID="fe06aa906b6ed316e84c76663bba9b775bb3e1e469391938485f2ead3d54a80a" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.333881 4718 scope.go:117] "RemoveContainer" containerID="63ae5cd414289f120bd643906abd81fcc995c7ab6486b15baf278693935e168e" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.397501 4718 scope.go:117] "RemoveContainer" containerID="eb34100650b2eb4b495ca7df5a60b353fdd6bc115d1d32973aa36891ea35b349" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.429819 4718 scope.go:117] "RemoveContainer" containerID="d42542d148e6b6999321a32ed3614146e64ca825b9a35f49b99cda3f13668efd" Jan 23 16:44:58 crc 
kubenswrapper[4718]: I0123 16:44:58.495747 4718 scope.go:117] "RemoveContainer" containerID="03fc5bd2a6a4b9f99dc015ed34bf0078a5965512a869f1ca4a4ef87aaf4368c1" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.560120 4718 scope.go:117] "RemoveContainer" containerID="d9fb76de9125c923237386a4a7b56a2ce2a6ab5db16b13e71f75d42c05e07922" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.601966 4718 scope.go:117] "RemoveContainer" containerID="8d06eaf6a30db9574ea4b92daad6e40594c9b4746c535755881b5a1a8c5bd46b" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.643289 4718 scope.go:117] "RemoveContainer" containerID="0234150c1a9fb395f32836b9e738e868435a0f389fee04c20af4b8d3f6f3fa50" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.670952 4718 scope.go:117] "RemoveContainer" containerID="41631682393d84d8a47c83a9e719f972c08836c1092c617cc9085470498ff6a0" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.875760 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.875877 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.875949 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.877542 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:44:58 crc kubenswrapper[4718]: I0123 16:44:58.877715 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" gracePeriod=600 Jan 23 16:44:59 crc kubenswrapper[4718]: E0123 16:44:59.013255 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:44:59 crc kubenswrapper[4718]: I0123 16:44:59.957945 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" exitCode=0 Jan 23 16:44:59 crc kubenswrapper[4718]: I0123 16:44:59.958038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec"} Jan 23 16:44:59 crc kubenswrapper[4718]: I0123 16:44:59.958434 4718 scope.go:117] "RemoveContainer" containerID="6f9740f575ccf5aef232552297b1345164a1e07af1b6f8f7ad7a166d05348d0a" Jan 23 16:44:59 crc kubenswrapper[4718]: I0123 16:44:59.959292 4718 
scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:44:59 crc kubenswrapper[4718]: E0123 16:44:59.959595 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.166595 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk"] Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.169201 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.171943 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.172199 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.200125 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk"] Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.309749 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfbn\" (UniqueName: \"kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.309795 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.310397 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.412372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfbn\" (UniqueName: \"kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.412456 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.412741 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.413969 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.425664 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.433360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfbn\" (UniqueName: \"kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn\") pod \"collect-profiles-29486445-qs2wk\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.498318 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:00 crc kubenswrapper[4718]: I0123 16:45:00.997487 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk"] Jan 23 16:45:01 crc kubenswrapper[4718]: I0123 16:45:01.996453 4718 generic.go:334] "Generic (PLEG): container finished" podID="a4e4a006-cff5-4912-87d0-89623f70d934" containerID="a86e36d6ef8cb83be478bda28568913dd980d79cce0f31440c1ff18e52bc8b4f" exitCode=0 Jan 23 16:45:01 crc kubenswrapper[4718]: I0123 16:45:01.996549 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" event={"ID":"a4e4a006-cff5-4912-87d0-89623f70d934","Type":"ContainerDied","Data":"a86e36d6ef8cb83be478bda28568913dd980d79cce0f31440c1ff18e52bc8b4f"} Jan 23 16:45:01 crc kubenswrapper[4718]: I0123 16:45:01.997121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" event={"ID":"a4e4a006-cff5-4912-87d0-89623f70d934","Type":"ContainerStarted","Data":"89cbad48ecb1b932d5176a74daf53998209398d7f2a1c86cb44a99d33416836c"} Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.570004 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.699679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhfbn\" (UniqueName: \"kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn\") pod \"a4e4a006-cff5-4912-87d0-89623f70d934\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.699780 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume\") pod \"a4e4a006-cff5-4912-87d0-89623f70d934\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.700007 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume\") pod \"a4e4a006-cff5-4912-87d0-89623f70d934\" (UID: \"a4e4a006-cff5-4912-87d0-89623f70d934\") " Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.701813 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4e4a006-cff5-4912-87d0-89623f70d934" (UID: "a4e4a006-cff5-4912-87d0-89623f70d934"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.709264 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4e4a006-cff5-4912-87d0-89623f70d934" (UID: "a4e4a006-cff5-4912-87d0-89623f70d934"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.715171 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn" (OuterVolumeSpecName: "kube-api-access-zhfbn") pod "a4e4a006-cff5-4912-87d0-89623f70d934" (UID: "a4e4a006-cff5-4912-87d0-89623f70d934"). InnerVolumeSpecName "kube-api-access-zhfbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.803511 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4e4a006-cff5-4912-87d0-89623f70d934-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.803555 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e4a006-cff5-4912-87d0-89623f70d934-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:45:03 crc kubenswrapper[4718]: I0123 16:45:03.803568 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhfbn\" (UniqueName: \"kubernetes.io/projected/a4e4a006-cff5-4912-87d0-89623f70d934-kube-api-access-zhfbn\") on node \"crc\" DevicePath \"\"" Jan 23 16:45:04 crc kubenswrapper[4718]: I0123 16:45:04.023045 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" event={"ID":"a4e4a006-cff5-4912-87d0-89623f70d934","Type":"ContainerDied","Data":"89cbad48ecb1b932d5176a74daf53998209398d7f2a1c86cb44a99d33416836c"} Jan 23 16:45:04 crc kubenswrapper[4718]: I0123 16:45:04.023119 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89cbad48ecb1b932d5176a74daf53998209398d7f2a1c86cb44a99d33416836c" Jan 23 16:45:04 crc kubenswrapper[4718]: I0123 16:45:04.023066 4718 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk" Jan 23 16:45:12 crc kubenswrapper[4718]: I0123 16:45:12.141166 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:45:12 crc kubenswrapper[4718]: E0123 16:45:12.141940 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:45:25 crc kubenswrapper[4718]: I0123 16:45:25.141283 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:45:25 crc kubenswrapper[4718]: E0123 16:45:25.142220 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:45:25 crc kubenswrapper[4718]: I0123 16:45:25.278715 4718 generic.go:334] "Generic (PLEG): container finished" podID="6854d6fc-92af-4083-a2e2-2f41dd9d2a73" containerID="a89aa1d7baaebb266f135e1000abc23ba31f353a24d7b8011a9c592611f5d525" exitCode=0 Jan 23 16:45:25 crc kubenswrapper[4718]: I0123 16:45:25.278763 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"6854d6fc-92af-4083-a2e2-2f41dd9d2a73","Type":"ContainerDied","Data":"a89aa1d7baaebb266f135e1000abc23ba31f353a24d7b8011a9c592611f5d525"} Jan 23 16:45:26 crc kubenswrapper[4718]: I0123 16:45:26.293258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6854d6fc-92af-4083-a2e2-2f41dd9d2a73","Type":"ContainerStarted","Data":"c85bc7ca8ccc86df9676850cca9c58bee5c26cf922f2fb1dee33d55bacca553b"} Jan 23 16:45:26 crc kubenswrapper[4718]: I0123 16:45:26.294115 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 16:45:38 crc kubenswrapper[4718]: I0123 16:45:38.140477 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:45:38 crc kubenswrapper[4718]: E0123 16:45:38.141246 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:45:38 crc kubenswrapper[4718]: I0123 16:45:38.958785 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 16:45:38 crc kubenswrapper[4718]: I0123 16:45:38.996105 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.996069804 podStartE2EDuration="50.996069804s" podCreationTimestamp="2026-01-23 16:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:45:26.319541115 +0000 UTC m=+1727.466783146" watchObservedRunningTime="2026-01-23 16:45:38.996069804 +0000 
UTC m=+1740.143311795" Jan 23 16:45:49 crc kubenswrapper[4718]: I0123 16:45:49.152224 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:45:49 crc kubenswrapper[4718]: E0123 16:45:49.153325 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:00 crc kubenswrapper[4718]: I0123 16:46:00.141077 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:46:00 crc kubenswrapper[4718]: E0123 16:46:00.142221 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:11 crc kubenswrapper[4718]: I0123 16:46:11.141497 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:46:11 crc kubenswrapper[4718]: E0123 16:46:11.142174 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:25 crc kubenswrapper[4718]: I0123 16:46:25.140921 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:46:25 crc kubenswrapper[4718]: E0123 16:46:25.142362 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:36 crc kubenswrapper[4718]: I0123 16:46:36.065900 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xgcbw"] Jan 23 16:46:36 crc kubenswrapper[4718]: I0123 16:46:36.085342 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2c18-account-create-update-q8bm5"] Jan 23 16:46:36 crc kubenswrapper[4718]: I0123 16:46:36.099122 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xgcbw"] Jan 23 16:46:36 crc kubenswrapper[4718]: I0123 16:46:36.114251 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2c18-account-create-update-q8bm5"] Jan 23 16:46:37 crc kubenswrapper[4718]: I0123 16:46:37.153785 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41974084-91dc-4daa-ba00-561e0d52e4c2" path="/var/lib/kubelet/pods/41974084-91dc-4daa-ba00-561e0d52e4c2/volumes" Jan 23 16:46:37 crc kubenswrapper[4718]: I0123 16:46:37.155250 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597decb2-86a1-400b-9635-cf0d9a10e643" path="/var/lib/kubelet/pods/597decb2-86a1-400b-9635-cf0d9a10e643/volumes" Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.062132 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-psw8d"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.081026 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-psw8d"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.096614 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4677-account-create-update-cb4s5"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.107818 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g2mvr"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.117834 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f265-account-create-update-kwnf2"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.128839 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4677-account-create-update-cb4s5"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.152890 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:46:39 crc kubenswrapper[4718]: E0123 16:46:39.153451 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.156949 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119f9654-89f5-40ae-b93a-ddde420e1a51" path="/var/lib/kubelet/pods/119f9654-89f5-40ae-b93a-ddde420e1a51/volumes" Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.158442 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e538c6e-53f8-46c2-a8c0-8cd2562b70b1" 
path="/var/lib/kubelet/pods/6e538c6e-53f8-46c2-a8c0-8cd2562b70b1/volumes" Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.159310 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f265-account-create-update-kwnf2"] Jan 23 16:46:39 crc kubenswrapper[4718]: I0123 16:46:39.159346 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g2mvr"] Jan 23 16:46:40 crc kubenswrapper[4718]: I0123 16:46:40.031791 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f9t8l"] Jan 23 16:46:40 crc kubenswrapper[4718]: I0123 16:46:40.046750 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f9t8l"] Jan 23 16:46:41 crc kubenswrapper[4718]: I0123 16:46:41.159886 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a728186-fd9a-4f7f-9dea-4d781141e899" path="/var/lib/kubelet/pods/3a728186-fd9a-4f7f-9dea-4d781141e899/volumes" Jan 23 16:46:41 crc kubenswrapper[4718]: I0123 16:46:41.166039 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70376de-08be-4cd5-a9b1-92366794a8ee" path="/var/lib/kubelet/pods/c70376de-08be-4cd5-a9b1-92366794a8ee/volumes" Jan 23 16:46:41 crc kubenswrapper[4718]: I0123 16:46:41.167252 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe962884-b75c-4b1d-963e-6e6aeec3b1a5" path="/var/lib/kubelet/pods/fe962884-b75c-4b1d-963e-6e6aeec3b1a5/volumes" Jan 23 16:46:43 crc kubenswrapper[4718]: I0123 16:46:43.036521 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b01e-account-create-update-mrj22"] Jan 23 16:46:43 crc kubenswrapper[4718]: I0123 16:46:43.049813 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b01e-account-create-update-mrj22"] Jan 23 16:46:43 crc kubenswrapper[4718]: I0123 16:46:43.158696 4718 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="b3d7193d-e220-4a8a-bf50-4de2784c88ec" path="/var/lib/kubelet/pods/b3d7193d-e220-4a8a-bf50-4de2784c88ec/volumes" Jan 23 16:46:51 crc kubenswrapper[4718]: I0123 16:46:51.141187 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:46:51 crc kubenswrapper[4718]: E0123 16:46:51.141980 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:46:58 crc kubenswrapper[4718]: I0123 16:46:58.991597 4718 scope.go:117] "RemoveContainer" containerID="8233f3106f9961c829e4a42d9ad2ed32959a4d702b0fc464b3293797c22ba5a8" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.031394 4718 scope.go:117] "RemoveContainer" containerID="d6d42073e0ac2b8268bfc287145f498db5e903468e73417f5d2b289d90ce726e" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.039117 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1b7b-account-create-update-lnclr"] Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.054584 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1b7b-account-create-update-lnclr"] Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.057902 4718 scope.go:117] "RemoveContainer" containerID="ca0d56176215fb25f11e21766e61d34db17935168f4616ab4bcdb6811bd158ce" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.078148 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-f79tn"] Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.089328 4718 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-f79tn"] Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.118850 4718 scope.go:117] "RemoveContainer" containerID="3245b7740a83e8caa35b2fe68e40e5b3aa1ffdcfe5df7f82ff75fe116bd3bba4" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.155082 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a487cdb-2762-422f-bdb5-df30a8ce6f20" path="/var/lib/kubelet/pods/4a487cdb-2762-422f-bdb5-df30a8ce6f20/volumes" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.156096 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc06a1f0-3166-4c08-bdef-e50e7ec1829f" path="/var/lib/kubelet/pods/bc06a1f0-3166-4c08-bdef-e50e7ec1829f/volumes" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.168418 4718 scope.go:117] "RemoveContainer" containerID="82afed912c063964ca1ded2ab35e314bf4063d1993a265971a9440df06fd26db" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.227499 4718 scope.go:117] "RemoveContainer" containerID="df4dd1404c98a52285568edef5cb0e077be4e1fba2f634a9bc8eb77cfd322e6a" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.291768 4718 scope.go:117] "RemoveContainer" containerID="41e4d9e2a3c80de4fbbdc303b7cb60f4b62f9146f55c8af2bcc1df97d0838627" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.336436 4718 scope.go:117] "RemoveContainer" containerID="952b0ed2a6d63c7f0b6bb18ceab5cb740988dbaf86ab16f90f2fd3e720b5c503" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.376582 4718 scope.go:117] "RemoveContainer" containerID="b0d478e010846ab5a51efb0e69eca260cc0db9f38af91d508549bcc5efc957cc" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.405506 4718 scope.go:117] "RemoveContainer" containerID="d365f3670a0bc24b8f47450a46faab08a73303d28afb7ed5cba5fc02417e9d60" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.428893 4718 scope.go:117] "RemoveContainer" 
containerID="721e7c3896883ba1f671c64b7ca0e619f95742a3f7b640494a72c9bfcd355630" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.449431 4718 scope.go:117] "RemoveContainer" containerID="c019728c4bd97f273acbd3e64866f54e85d86263074ec8113f92841283b49cdd" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.470203 4718 scope.go:117] "RemoveContainer" containerID="0de3444983dca25de35a01586dd8a71457b705626820255bb47848314ab05a8e" Jan 23 16:46:59 crc kubenswrapper[4718]: I0123 16:46:59.490768 4718 scope.go:117] "RemoveContainer" containerID="3a9f3ecac3ac2af201ccdb1707e3672be5d151fade7a6dc5d2ea3f103e8f450c" Jan 23 16:47:02 crc kubenswrapper[4718]: I0123 16:47:02.035395 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5lj86"] Jan 23 16:47:02 crc kubenswrapper[4718]: I0123 16:47:02.048319 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5lj86"] Jan 23 16:47:03 crc kubenswrapper[4718]: I0123 16:47:03.148544 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:47:03 crc kubenswrapper[4718]: E0123 16:47:03.149164 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:47:03 crc kubenswrapper[4718]: I0123 16:47:03.176124 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f60062d-a297-484e-a230-41a8c9f8e5e4" path="/var/lib/kubelet/pods/2f60062d-a297-484e-a230-41a8c9f8e5e4/volumes" Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.079516 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-create-5lntz"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.108693 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5lntz"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.164102 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e891ff28-e439-4bee-8938-4f148f0b734d" path="/var/lib/kubelet/pods/e891ff28-e439-4bee-8938-4f148f0b734d/volumes" Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.164774 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-86cf-account-create-update-rzlzf"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.168600 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9aa1-account-create-update-znt6h"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.186191 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-b6d3-account-create-update-pxk6l"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.198777 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-l4rtr"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.211254 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-86cf-account-create-update-rzlzf"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.223021 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k9b4f"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.235838 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9aa1-account-create-update-znt6h"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.247739 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b5d4-account-create-update-zvckr"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.262457 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-l4rtr"] Jan 23 16:47:07 crc 
kubenswrapper[4718]: I0123 16:47:07.278195 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-b6d3-account-create-update-pxk6l"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.290608 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b5d4-account-create-update-zvckr"] Jan 23 16:47:07 crc kubenswrapper[4718]: I0123 16:47:07.302498 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k9b4f"] Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.152029 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2206d1-ef66-450d-a83a-9c7c17b8e96d" path="/var/lib/kubelet/pods/0c2206d1-ef66-450d-a83a-9c7c17b8e96d/volumes" Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.153172 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28234654-ae42-4f18-88ed-74e66f7b91a3" path="/var/lib/kubelet/pods/28234654-ae42-4f18-88ed-74e66f7b91a3/volumes" Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.153763 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87063a9-a1b8-4120-9770-f939e3e16a7b" path="/var/lib/kubelet/pods/b87063a9-a1b8-4120-9770-f939e3e16a7b/volumes" Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.154345 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99e0f19-90ef-4836-a587-0e137f44c7bb" path="/var/lib/kubelet/pods/e99e0f19-90ef-4836-a587-0e137f44c7bb/volumes" Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.155407 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3dc1ea4-e411-4a36-ace6-9c026462f7d0" path="/var/lib/kubelet/pods/f3dc1ea4-e411-4a36-ace6-9c026462f7d0/volumes" Jan 23 16:47:09 crc kubenswrapper[4718]: I0123 16:47:09.156003 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febec68a-b3c0-44fe-853f-7978c5012430" path="/var/lib/kubelet/pods/febec68a-b3c0-44fe-853f-7978c5012430/volumes" Jan 23 
16:47:15 crc kubenswrapper[4718]: I0123 16:47:15.041948 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-s2sfc"] Jan 23 16:47:15 crc kubenswrapper[4718]: I0123 16:47:15.053677 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-s2sfc"] Jan 23 16:47:15 crc kubenswrapper[4718]: I0123 16:47:15.140783 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:47:15 crc kubenswrapper[4718]: E0123 16:47:15.141271 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:47:15 crc kubenswrapper[4718]: I0123 16:47:15.172580 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8" path="/var/lib/kubelet/pods/d3e031b6-6ffd-4259-a8c9-57e1cde7ecc8/volumes" Jan 23 16:47:29 crc kubenswrapper[4718]: I0123 16:47:29.149171 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:47:29 crc kubenswrapper[4718]: E0123 16:47:29.150012 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:47:40 crc kubenswrapper[4718]: I0123 16:47:40.141695 4718 scope.go:117] 
"RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:47:40 crc kubenswrapper[4718]: E0123 16:47:40.142784 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:47:46 crc kubenswrapper[4718]: I0123 16:47:46.051058 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cl7vm"] Jan 23 16:47:46 crc kubenswrapper[4718]: I0123 16:47:46.064278 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vbzkx"] Jan 23 16:47:46 crc kubenswrapper[4718]: I0123 16:47:46.077435 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cl7vm"] Jan 23 16:47:46 crc kubenswrapper[4718]: I0123 16:47:46.088082 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vbzkx"] Jan 23 16:47:47 crc kubenswrapper[4718]: I0123 16:47:47.152586 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4da531-c3ee-4c93-8de9-60f9c0ce9858" path="/var/lib/kubelet/pods/2e4da531-c3ee-4c93-8de9-60f9c0ce9858/volumes" Jan 23 16:47:47 crc kubenswrapper[4718]: I0123 16:47:47.153571 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdefc9e4-c27d-4b53-a75a-5a74124d31f2" path="/var/lib/kubelet/pods/fdefc9e4-c27d-4b53-a75a-5a74124d31f2/volumes" Jan 23 16:47:54 crc kubenswrapper[4718]: I0123 16:47:54.140536 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:47:54 crc kubenswrapper[4718]: E0123 16:47:54.141366 4718 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:47:57 crc kubenswrapper[4718]: I0123 16:47:57.076474 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bvdgb"] Jan 23 16:47:57 crc kubenswrapper[4718]: I0123 16:47:57.095990 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bvdgb"] Jan 23 16:47:57 crc kubenswrapper[4718]: I0123 16:47:57.153234 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beac9063-62c9-4cb1-aa45-786d02b1e9db" path="/var/lib/kubelet/pods/beac9063-62c9-4cb1-aa45-786d02b1e9db/volumes" Jan 23 16:47:58 crc kubenswrapper[4718]: I0123 16:47:58.036312 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xpbng"] Jan 23 16:47:58 crc kubenswrapper[4718]: I0123 16:47:58.051052 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xpbng"] Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.032623 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9c6mc"] Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.044352 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9c6mc"] Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.153782 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27796574-e773-413a-9d32-beb6e99cd093" path="/var/lib/kubelet/pods/27796574-e773-413a-9d32-beb6e99cd093/volumes" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.155419 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c" path="/var/lib/kubelet/pods/2d083ea0-8d2d-4ca7-8530-2f558ad8bf5c/volumes" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.773894 4718 scope.go:117] "RemoveContainer" containerID="1804ffa282c99d771644734d87712803fde542c04bd872b3637d41045dacb23a" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.809562 4718 scope.go:117] "RemoveContainer" containerID="feca561fd8b606f7d184cd45cd011d5b0fecdefafa244f915fa599d0e6940a3f" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.857322 4718 scope.go:117] "RemoveContainer" containerID="9bc0c103f1ca07aa38be9bd4661db3a8ceb2b993b8bd92ad1c2a22f7dbbbb239" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.907422 4718 scope.go:117] "RemoveContainer" containerID="2a1fe088040d79cfbc4a658edb49b2d121b216c99aff74320d8f7c9c0e60d21c" Jan 23 16:47:59 crc kubenswrapper[4718]: I0123 16:47:59.972094 4718 scope.go:117] "RemoveContainer" containerID="4749417a1d1a410fef9090a5374e4a3b4c6863f00118ba70d47ce688475f1031" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.021365 4718 scope.go:117] "RemoveContainer" containerID="f0d4990536a6dfd88ffba2bd09d8011bdc6fbcbefe8d58bb47df746480f20e2d" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.085134 4718 scope.go:117] "RemoveContainer" containerID="4e5cc4d921274cf3d9f106afe984955d127ec71f994dcb3d025e89edba83c862" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.112441 4718 scope.go:117] "RemoveContainer" containerID="72f8578ac8c3194b0a4d4f079f4f7310bbac1f7801a1a5e1b3c146b4845377a2" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.139001 4718 scope.go:117] "RemoveContainer" containerID="3791419ffe808d0de7da234ef9f296849604382d618a3582e1957ca9d31224b1" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.164030 4718 scope.go:117] "RemoveContainer" containerID="d3d02f885ca0ab08a56d12705e8fc0fd29afb1b9820ba4c7b07312b64139eb4b" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.198755 4718 scope.go:117] 
"RemoveContainer" containerID="01d6a7d89e4e4fa76f26a2197178880a56f7ab533a47ffb4fdda5a7e34254d20" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.224836 4718 scope.go:117] "RemoveContainer" containerID="657997542b08555325b68c9d865a781d35f4875c890986e8685764a61cf7b0fd" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.245060 4718 scope.go:117] "RemoveContainer" containerID="71cf5ab0e0d3b8ffb7793dbe9fbdb8fe1df804b842806e3b6f135145a12b8a29" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.273474 4718 scope.go:117] "RemoveContainer" containerID="3f1d84f4b2e206b1284b0c4c3cdf79633cf179efe2ff95cac6f68505a6ff1245" Jan 23 16:48:00 crc kubenswrapper[4718]: I0123 16:48:00.299237 4718 scope.go:117] "RemoveContainer" containerID="de9217333a38058157ba4402a01353a7b9513cddfb908b6ef47fce5ae04997d2" Jan 23 16:48:09 crc kubenswrapper[4718]: I0123 16:48:09.147846 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:48:09 crc kubenswrapper[4718]: E0123 16:48:09.148851 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:48:11 crc kubenswrapper[4718]: I0123 16:48:11.240689 4718 generic.go:334] "Generic (PLEG): container finished" podID="d78674b0-cdd9-4a34-a2d0-b9eece735396" containerID="f38a58d69380493c330f20e0fd710e0a1ed05263931ec6a181e8415ebd27a329" exitCode=0 Jan 23 16:48:11 crc kubenswrapper[4718]: I0123 16:48:11.240773 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" 
event={"ID":"d78674b0-cdd9-4a34-a2d0-b9eece735396","Type":"ContainerDied","Data":"f38a58d69380493c330f20e0fd710e0a1ed05263931ec6a181e8415ebd27a329"} Jan 23 16:48:11 crc kubenswrapper[4718]: E0123 16:48:11.349580 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78674b0_cdd9_4a34_a2d0_b9eece735396.slice/crio-f38a58d69380493c330f20e0fd710e0a1ed05263931ec6a181e8415ebd27a329.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.740147 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.798622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svwd6\" (UniqueName: \"kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6\") pod \"d78674b0-cdd9-4a34-a2d0-b9eece735396\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.798790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam\") pod \"d78674b0-cdd9-4a34-a2d0-b9eece735396\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.798878 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory\") pod \"d78674b0-cdd9-4a34-a2d0-b9eece735396\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.799147 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle\") pod \"d78674b0-cdd9-4a34-a2d0-b9eece735396\" (UID: \"d78674b0-cdd9-4a34-a2d0-b9eece735396\") " Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.807684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6" (OuterVolumeSpecName: "kube-api-access-svwd6") pod "d78674b0-cdd9-4a34-a2d0-b9eece735396" (UID: "d78674b0-cdd9-4a34-a2d0-b9eece735396"). InnerVolumeSpecName "kube-api-access-svwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.809154 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d78674b0-cdd9-4a34-a2d0-b9eece735396" (UID: "d78674b0-cdd9-4a34-a2d0-b9eece735396"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.840998 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory" (OuterVolumeSpecName: "inventory") pod "d78674b0-cdd9-4a34-a2d0-b9eece735396" (UID: "d78674b0-cdd9-4a34-a2d0-b9eece735396"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.844475 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d78674b0-cdd9-4a34-a2d0-b9eece735396" (UID: "d78674b0-cdd9-4a34-a2d0-b9eece735396"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.902264 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svwd6\" (UniqueName: \"kubernetes.io/projected/d78674b0-cdd9-4a34-a2d0-b9eece735396-kube-api-access-svwd6\") on node \"crc\" DevicePath \"\"" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.902298 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.902309 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:48:12 crc kubenswrapper[4718]: I0123 16:48:12.902320 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78674b0-cdd9-4a34-a2d0-b9eece735396-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.261652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" event={"ID":"d78674b0-cdd9-4a34-a2d0-b9eece735396","Type":"ContainerDied","Data":"fd38d8f54c0d40bd67fe1bd8937a60dc00ba16f5b1d4fbb872eb322012c0fa65"} Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.261692 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd38d8f54c0d40bd67fe1bd8937a60dc00ba16f5b1d4fbb872eb322012c0fa65" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.261744 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.369909 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6"] Jan 23 16:48:13 crc kubenswrapper[4718]: E0123 16:48:13.370695 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e4a006-cff5-4912-87d0-89623f70d934" containerName="collect-profiles" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.370720 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e4a006-cff5-4912-87d0-89623f70d934" containerName="collect-profiles" Jan 23 16:48:13 crc kubenswrapper[4718]: E0123 16:48:13.370747 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78674b0-cdd9-4a34-a2d0-b9eece735396" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.370757 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78674b0-cdd9-4a34-a2d0-b9eece735396" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.371038 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78674b0-cdd9-4a34-a2d0-b9eece735396" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.371094 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e4a006-cff5-4912-87d0-89623f70d934" containerName="collect-profiles" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.372062 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.377093 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.377201 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.377224 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.381927 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.400809 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6"] Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.414058 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgh29\" (UniqueName: \"kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.414241 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 
16:48:13.414407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.516907 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.517084 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgh29\" (UniqueName: \"kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.517152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.524216 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.526162 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.538821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgh29\" (UniqueName: \"kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:13 crc kubenswrapper[4718]: I0123 16:48:13.700460 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:48:14 crc kubenswrapper[4718]: I0123 16:48:14.316596 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6"] Jan 23 16:48:14 crc kubenswrapper[4718]: I0123 16:48:14.319953 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:48:15 crc kubenswrapper[4718]: I0123 16:48:15.711754 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" event={"ID":"c94d0e96-185e-4f09-bb48-9fb2e6874fec","Type":"ContainerStarted","Data":"95d5783c3a80a81be7d58266d640d7fd6f1095332e72c16c0a54f25ddb48eae4"} Jan 23 16:48:16 crc kubenswrapper[4718]: I0123 16:48:16.723905 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" event={"ID":"c94d0e96-185e-4f09-bb48-9fb2e6874fec","Type":"ContainerStarted","Data":"7d28df88a7136841eec59ce0686a066f02e30f494a44c5ccaf1e184bed880008"} Jan 23 16:48:16 crc kubenswrapper[4718]: I0123 16:48:16.757819 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" podStartSLOduration=2.336871223 podStartE2EDuration="3.757802996s" podCreationTimestamp="2026-01-23 16:48:13 +0000 UTC" firstStartedPulling="2026-01-23 16:48:14.31967839 +0000 UTC m=+1895.466920381" lastFinishedPulling="2026-01-23 16:48:15.740610173 +0000 UTC m=+1896.887852154" observedRunningTime="2026-01-23 16:48:16.741482534 +0000 UTC m=+1897.888724525" watchObservedRunningTime="2026-01-23 16:48:16.757802996 +0000 UTC m=+1897.905044987" Jan 23 16:48:17 crc kubenswrapper[4718]: I0123 16:48:17.071807 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b4crv"] Jan 23 16:48:17 crc kubenswrapper[4718]: I0123 
16:48:17.086219 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b4crv"] Jan 23 16:48:17 crc kubenswrapper[4718]: I0123 16:48:17.153871 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd0aeb0-24b5-49c2-96cd-83f9defa05e1" path="/var/lib/kubelet/pods/0bd0aeb0-24b5-49c2-96cd-83f9defa05e1/volumes" Jan 23 16:48:21 crc kubenswrapper[4718]: I0123 16:48:21.140768 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:48:21 crc kubenswrapper[4718]: E0123 16:48:21.141887 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:48:34 crc kubenswrapper[4718]: I0123 16:48:34.140691 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:48:34 crc kubenswrapper[4718]: E0123 16:48:34.142765 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:48:45 crc kubenswrapper[4718]: I0123 16:48:45.141064 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:48:45 crc kubenswrapper[4718]: E0123 16:48:45.141830 4718 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:48:55 crc kubenswrapper[4718]: I0123 16:48:55.046784 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s285q"] Jan 23 16:48:55 crc kubenswrapper[4718]: I0123 16:48:55.071412 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s285q"] Jan 23 16:48:55 crc kubenswrapper[4718]: I0123 16:48:55.151603 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf77eff7-55fd-45ad-8e28-7c437167cc0b" path="/var/lib/kubelet/pods/cf77eff7-55fd-45ad-8e28-7c437167cc0b/volumes" Jan 23 16:48:57 crc kubenswrapper[4718]: I0123 16:48:57.141292 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:48:57 crc kubenswrapper[4718]: E0123 16:48:57.142311 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:49:00 crc kubenswrapper[4718]: I0123 16:49:00.630876 4718 scope.go:117] "RemoveContainer" containerID="7354c255cfcfe0b314bf46cf72227ebcdcc658d48d95d38ecc2feccfe4f50964" Jan 23 16:49:00 crc kubenswrapper[4718]: I0123 16:49:00.673588 4718 scope.go:117] "RemoveContainer" containerID="3dc19c218f202ae57b0754158be96e3643674a265d0cd8429c4401ce2134dc5d" 
Jan 23 16:49:08 crc kubenswrapper[4718]: I0123 16:49:08.035268 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bthbv"] Jan 23 16:49:08 crc kubenswrapper[4718]: I0123 16:49:08.047727 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8rn8k"] Jan 23 16:49:08 crc kubenswrapper[4718]: I0123 16:49:08.061261 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8rn8k"] Jan 23 16:49:08 crc kubenswrapper[4718]: I0123 16:49:08.073499 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bthbv"] Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.041529 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9952-account-create-update-x5sd9"] Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.056314 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-573f-account-create-update-ktn8s"] Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.066640 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9952-account-create-update-x5sd9"] Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.078041 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-573f-account-create-update-ktn8s"] Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.159788 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09" path="/var/lib/kubelet/pods/0c7b1f93-c166-4cd5-bfc6-5f2a0c94fb09/volumes" Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.160551 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328feadc-ef28-4714-acfc-a10826fb69f6" path="/var/lib/kubelet/pods/328feadc-ef28-4714-acfc-a10826fb69f6/volumes" Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.161362 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ab905731-cf34-4768-9cef-c35c3bed8f22" path="/var/lib/kubelet/pods/ab905731-cf34-4768-9cef-c35c3bed8f22/volumes" Jan 23 16:49:09 crc kubenswrapper[4718]: I0123 16:49:09.162284 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80a9db0-876a-4ba3-a5ce-018220181097" path="/var/lib/kubelet/pods/c80a9db0-876a-4ba3-a5ce-018220181097/volumes" Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.038580 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c3f8-account-create-update-pdfm4"] Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.051500 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kwhc9"] Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.064384 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kwhc9"] Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.076611 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c3f8-account-create-update-pdfm4"] Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.139835 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:49:11 crc kubenswrapper[4718]: E0123 16:49:11.140195 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:49:11 crc kubenswrapper[4718]: I0123 16:49:11.156699 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77678c25-65e5-449f-b905-c8732eb45518" path="/var/lib/kubelet/pods/77678c25-65e5-449f-b905-c8732eb45518/volumes" Jan 23 16:49:11 crc 
kubenswrapper[4718]: I0123 16:49:11.157424 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d454d95a-f920-46da-9b19-70d3b987d808" path="/var/lib/kubelet/pods/d454d95a-f920-46da-9b19-70d3b987d808/volumes" Jan 23 16:49:25 crc kubenswrapper[4718]: I0123 16:49:25.141734 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:49:25 crc kubenswrapper[4718]: E0123 16:49:25.142649 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:49:40 crc kubenswrapper[4718]: I0123 16:49:40.140709 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:49:40 crc kubenswrapper[4718]: E0123 16:49:40.141494 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:49:45 crc kubenswrapper[4718]: I0123 16:49:45.056104 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g6vj6"] Jan 23 16:49:45 crc kubenswrapper[4718]: I0123 16:49:45.072681 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g6vj6"] Jan 23 16:49:45 crc kubenswrapper[4718]: I0123 16:49:45.157090 4718 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506a04e5-5008-4415-9afd-6ccc208f9dd4" path="/var/lib/kubelet/pods/506a04e5-5008-4415-9afd-6ccc208f9dd4/volumes" Jan 23 16:49:53 crc kubenswrapper[4718]: I0123 16:49:53.140560 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:49:53 crc kubenswrapper[4718]: E0123 16:49:53.141416 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:50:00 crc kubenswrapper[4718]: I0123 16:50:00.776119 4718 scope.go:117] "RemoveContainer" containerID="f0f3c365ac1b8f78324b14087daa98f29f15728d5122e41c8ca5fc387de4da59" Jan 23 16:50:00 crc kubenswrapper[4718]: I0123 16:50:00.810358 4718 scope.go:117] "RemoveContainer" containerID="17782fee830311c6818e7eb8c4ef49e5eb3d1405b713d72abcd6710d06d41fb8" Jan 23 16:50:00 crc kubenswrapper[4718]: I0123 16:50:00.881332 4718 scope.go:117] "RemoveContainer" containerID="b169b4084d813f4c4209f67c653fe48d6df29ce0761b44ec50ea030a6dda81cf" Jan 23 16:50:00 crc kubenswrapper[4718]: I0123 16:50:00.924863 4718 scope.go:117] "RemoveContainer" containerID="a2cea0a7c56463d1d2340a33d4783bd8bb88c0c7d0cbd8b5c0c7a6c7ff8d7048" Jan 23 16:50:00 crc kubenswrapper[4718]: I0123 16:50:00.988838 4718 scope.go:117] "RemoveContainer" containerID="4f7363a6dee84e8b239dd8490dfbb356900138d3043c207ccdf11aa1357e39b3" Jan 23 16:50:01 crc kubenswrapper[4718]: I0123 16:50:01.034674 4718 scope.go:117] "RemoveContainer" containerID="5f7e29dba2fd77eff5cfd625679869ea9a88ea76ee7468bd11a247455e817879" Jan 23 16:50:01 crc kubenswrapper[4718]: I0123 16:50:01.091575 4718 
scope.go:117] "RemoveContainer" containerID="d6f3f061df6fd12443d3fb9cecbc496a4a5aea0202f56938da22529b2903a4cb" Jan 23 16:50:06 crc kubenswrapper[4718]: I0123 16:50:06.140176 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:50:07 crc kubenswrapper[4718]: I0123 16:50:07.026150 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4"} Jan 23 16:50:11 crc kubenswrapper[4718]: I0123 16:50:11.051167 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-lm8pg"] Jan 23 16:50:11 crc kubenswrapper[4718]: I0123 16:50:11.064919 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-lm8pg"] Jan 23 16:50:11 crc kubenswrapper[4718]: I0123 16:50:11.153089 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41125ff2-1f34-4be1-a9f1-97c9a8987dba" path="/var/lib/kubelet/pods/41125ff2-1f34-4be1-a9f1-97c9a8987dba/volumes" Jan 23 16:50:12 crc kubenswrapper[4718]: I0123 16:50:12.031170 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-03bd-account-create-update-9h26j"] Jan 23 16:50:12 crc kubenswrapper[4718]: I0123 16:50:12.041832 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-03bd-account-create-update-9h26j"] Jan 23 16:50:13 crc kubenswrapper[4718]: I0123 16:50:13.030114 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c7phc"] Jan 23 16:50:13 crc kubenswrapper[4718]: I0123 16:50:13.040495 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c7phc"] Jan 23 16:50:13 crc kubenswrapper[4718]: I0123 16:50:13.154433 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e6465851-021f-4d4b-8f64-aec0b8be2cee" path="/var/lib/kubelet/pods/e6465851-021f-4d4b-8f64-aec0b8be2cee/volumes" Jan 23 16:50:13 crc kubenswrapper[4718]: I0123 16:50:13.155146 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8db3c02-bc86-4e35-8cce-3fba179bfe88" path="/var/lib/kubelet/pods/f8db3c02-bc86-4e35-8cce-3fba179bfe88/volumes" Jan 23 16:50:16 crc kubenswrapper[4718]: I0123 16:50:16.042838 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98zcf"] Jan 23 16:50:16 crc kubenswrapper[4718]: I0123 16:50:16.071910 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98zcf"] Jan 23 16:50:17 crc kubenswrapper[4718]: I0123 16:50:17.153233 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf" path="/var/lib/kubelet/pods/2261bb18-b7c4-4aeb-81c9-59a1f1e37dcf/volumes" Jan 23 16:50:35 crc kubenswrapper[4718]: I0123 16:50:35.327040 4718 generic.go:334] "Generic (PLEG): container finished" podID="c94d0e96-185e-4f09-bb48-9fb2e6874fec" containerID="7d28df88a7136841eec59ce0686a066f02e30f494a44c5ccaf1e184bed880008" exitCode=0 Jan 23 16:50:35 crc kubenswrapper[4718]: I0123 16:50:35.327469 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" event={"ID":"c94d0e96-185e-4f09-bb48-9fb2e6874fec","Type":"ContainerDied","Data":"7d28df88a7136841eec59ce0686a066f02e30f494a44c5ccaf1e184bed880008"} Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.878529 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.961327 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam\") pod \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.961569 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgh29\" (UniqueName: \"kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29\") pod \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.961617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory\") pod \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\" (UID: \"c94d0e96-185e-4f09-bb48-9fb2e6874fec\") " Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.968139 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29" (OuterVolumeSpecName: "kube-api-access-hgh29") pod "c94d0e96-185e-4f09-bb48-9fb2e6874fec" (UID: "c94d0e96-185e-4f09-bb48-9fb2e6874fec"). InnerVolumeSpecName "kube-api-access-hgh29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:50:36 crc kubenswrapper[4718]: I0123 16:50:36.998591 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c94d0e96-185e-4f09-bb48-9fb2e6874fec" (UID: "c94d0e96-185e-4f09-bb48-9fb2e6874fec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.006971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory" (OuterVolumeSpecName: "inventory") pod "c94d0e96-185e-4f09-bb48-9fb2e6874fec" (UID: "c94d0e96-185e-4f09-bb48-9fb2e6874fec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.066402 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.067601 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgh29\" (UniqueName: \"kubernetes.io/projected/c94d0e96-185e-4f09-bb48-9fb2e6874fec-kube-api-access-hgh29\") on node \"crc\" DevicePath \"\"" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.067614 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94d0e96-185e-4f09-bb48-9fb2e6874fec-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.350736 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" 
event={"ID":"c94d0e96-185e-4f09-bb48-9fb2e6874fec","Type":"ContainerDied","Data":"95d5783c3a80a81be7d58266d640d7fd6f1095332e72c16c0a54f25ddb48eae4"} Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.350778 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d5783c3a80a81be7d58266d640d7fd6f1095332e72c16c0a54f25ddb48eae4" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.350816 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.448347 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79"] Jan 23 16:50:37 crc kubenswrapper[4718]: E0123 16:50:37.449080 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94d0e96-185e-4f09-bb48-9fb2e6874fec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.449107 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94d0e96-185e-4f09-bb48-9fb2e6874fec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.449413 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94d0e96-185e-4f09-bb48-9fb2e6874fec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.450438 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.453442 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.453832 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.454340 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.457416 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.470129 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79"] Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.583174 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.583266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2s8\" (UniqueName: \"kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 
16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.583343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.685667 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.685767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h2s8\" (UniqueName: \"kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.685875 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.692391 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.693733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.702738 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h2s8\" (UniqueName: \"kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9v79\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:37 crc kubenswrapper[4718]: I0123 16:50:37.771059 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:50:38 crc kubenswrapper[4718]: I0123 16:50:38.334270 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79"] Jan 23 16:50:38 crc kubenswrapper[4718]: I0123 16:50:38.362714 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" event={"ID":"a410865c-527d-4070-8dcd-d4ef16f73c82","Type":"ContainerStarted","Data":"df22393d6190b356304ceb603354e633ff117b718e6c20aa5931242ac9af2381"} Jan 23 16:50:39 crc kubenswrapper[4718]: I0123 16:50:39.373976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" event={"ID":"a410865c-527d-4070-8dcd-d4ef16f73c82","Type":"ContainerStarted","Data":"81d93946c8d3f419ed58bafb2405d02f87517dba936009d921d31740d9f622d6"} Jan 23 16:50:39 crc kubenswrapper[4718]: I0123 16:50:39.400559 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" podStartSLOduration=1.789769568 podStartE2EDuration="2.400540968s" podCreationTimestamp="2026-01-23 16:50:37 +0000 UTC" firstStartedPulling="2026-01-23 16:50:38.332735303 +0000 UTC m=+2039.479977294" lastFinishedPulling="2026-01-23 16:50:38.943506703 +0000 UTC m=+2040.090748694" observedRunningTime="2026-01-23 16:50:39.39876822 +0000 UTC m=+2040.546010221" watchObservedRunningTime="2026-01-23 16:50:39.400540968 +0000 UTC m=+2040.547782959" Jan 23 16:50:59 crc kubenswrapper[4718]: I0123 16:50:59.062199 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bpll2"] Jan 23 16:50:59 crc kubenswrapper[4718]: I0123 16:50:59.071461 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bpll2"] Jan 23 16:50:59 crc 
kubenswrapper[4718]: I0123 16:50:59.151833 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eeffb69-5654-4e1d-ae21-580a5a235246" path="/var/lib/kubelet/pods/9eeffb69-5654-4e1d-ae21-580a5a235246/volumes" Jan 23 16:51:01 crc kubenswrapper[4718]: I0123 16:51:01.266378 4718 scope.go:117] "RemoveContainer" containerID="74d28042ae3100b01691674a2f9167b7ddfdcf96c1f35077b06301e619483e51" Jan 23 16:51:01 crc kubenswrapper[4718]: I0123 16:51:01.294954 4718 scope.go:117] "RemoveContainer" containerID="5c6cf58a6bcc4727c1466a1749d443da61ff05bd27fde8a17c470af5d757cc7f" Jan 23 16:51:01 crc kubenswrapper[4718]: I0123 16:51:01.368205 4718 scope.go:117] "RemoveContainer" containerID="515c04a29fef45d29dd5ebf358fb085926754f858e699d3ad6c424457ab53fc6" Jan 23 16:51:01 crc kubenswrapper[4718]: I0123 16:51:01.430337 4718 scope.go:117] "RemoveContainer" containerID="5fe5268e1fc895fd279ed08c0f2fad6d958c47574bda524ef22f26b709867bae" Jan 23 16:51:01 crc kubenswrapper[4718]: I0123 16:51:01.485860 4718 scope.go:117] "RemoveContainer" containerID="a3401881c188eb0fac2e24bc3bbffde120e02d6e13d850e3f01e2c95162b65b4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.471676 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.474920 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.483543 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.578328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.578535 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.578580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf2m\" (UniqueName: \"kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.681107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.681301 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.681336 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctf2m\" (UniqueName: \"kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.682135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.682349 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.707850 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctf2m\" (UniqueName: \"kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m\") pod \"redhat-operators-g9rv4\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:37 crc kubenswrapper[4718]: I0123 16:51:37.812625 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:38 crc kubenswrapper[4718]: I0123 16:51:38.372185 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:51:39 crc kubenswrapper[4718]: I0123 16:51:39.021030 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerID="95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9" exitCode=0 Jan 23 16:51:39 crc kubenswrapper[4718]: I0123 16:51:39.021132 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerDied","Data":"95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9"} Jan 23 16:51:39 crc kubenswrapper[4718]: I0123 16:51:39.022916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerStarted","Data":"41cfd4659d059337219ab433a97e1ea8fcd817d134bb5615beadf380cbd3665d"} Jan 23 16:51:40 crc kubenswrapper[4718]: I0123 16:51:40.040293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerStarted","Data":"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3"} Jan 23 16:51:44 crc kubenswrapper[4718]: I0123 16:51:44.082860 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerID="b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3" exitCode=0 Jan 23 16:51:44 crc kubenswrapper[4718]: I0123 16:51:44.083390 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" 
event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerDied","Data":"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3"} Jan 23 16:51:45 crc kubenswrapper[4718]: I0123 16:51:45.094730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerStarted","Data":"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54"} Jan 23 16:51:45 crc kubenswrapper[4718]: I0123 16:51:45.119451 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9rv4" podStartSLOduration=2.431155097 podStartE2EDuration="8.119431713s" podCreationTimestamp="2026-01-23 16:51:37 +0000 UTC" firstStartedPulling="2026-01-23 16:51:39.022926315 +0000 UTC m=+2100.170168306" lastFinishedPulling="2026-01-23 16:51:44.711202931 +0000 UTC m=+2105.858444922" observedRunningTime="2026-01-23 16:51:45.110407469 +0000 UTC m=+2106.257649470" watchObservedRunningTime="2026-01-23 16:51:45.119431713 +0000 UTC m=+2106.266673704" Jan 23 16:51:47 crc kubenswrapper[4718]: I0123 16:51:47.812970 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:47 crc kubenswrapper[4718]: I0123 16:51:47.813792 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:48 crc kubenswrapper[4718]: I0123 16:51:48.868734 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9rv4" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="registry-server" probeResult="failure" output=< Jan 23 16:51:48 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 16:51:48 crc kubenswrapper[4718]: > Jan 23 16:51:56 crc kubenswrapper[4718]: E0123 16:51:56.356516 4718 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda410865c_527d_4070_8dcd_d4ef16f73c82.slice/crio-conmon-81d93946c8d3f419ed58bafb2405d02f87517dba936009d921d31740d9f622d6.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:51:57 crc kubenswrapper[4718]: I0123 16:51:57.201113 4718 generic.go:334] "Generic (PLEG): container finished" podID="a410865c-527d-4070-8dcd-d4ef16f73c82" containerID="81d93946c8d3f419ed58bafb2405d02f87517dba936009d921d31740d9f622d6" exitCode=0 Jan 23 16:51:57 crc kubenswrapper[4718]: I0123 16:51:57.201208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" event={"ID":"a410865c-527d-4070-8dcd-d4ef16f73c82","Type":"ContainerDied","Data":"81d93946c8d3f419ed58bafb2405d02f87517dba936009d921d31740d9f622d6"} Jan 23 16:51:57 crc kubenswrapper[4718]: I0123 16:51:57.862962 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:57 crc kubenswrapper[4718]: I0123 16:51:57.927062 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.114448 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.717512 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.827774 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory\") pod \"a410865c-527d-4070-8dcd-d4ef16f73c82\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.827995 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam\") pod \"a410865c-527d-4070-8dcd-d4ef16f73c82\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.828058 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h2s8\" (UniqueName: \"kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8\") pod \"a410865c-527d-4070-8dcd-d4ef16f73c82\" (UID: \"a410865c-527d-4070-8dcd-d4ef16f73c82\") " Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.834413 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8" (OuterVolumeSpecName: "kube-api-access-8h2s8") pod "a410865c-527d-4070-8dcd-d4ef16f73c82" (UID: "a410865c-527d-4070-8dcd-d4ef16f73c82"). InnerVolumeSpecName "kube-api-access-8h2s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.861954 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory" (OuterVolumeSpecName: "inventory") pod "a410865c-527d-4070-8dcd-d4ef16f73c82" (UID: "a410865c-527d-4070-8dcd-d4ef16f73c82"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.863268 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a410865c-527d-4070-8dcd-d4ef16f73c82" (UID: "a410865c-527d-4070-8dcd-d4ef16f73c82"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.930930 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.930967 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a410865c-527d-4070-8dcd-d4ef16f73c82-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:51:58 crc kubenswrapper[4718]: I0123 16:51:58.930983 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h2s8\" (UniqueName: \"kubernetes.io/projected/a410865c-527d-4070-8dcd-d4ef16f73c82-kube-api-access-8h2s8\") on node \"crc\" DevicePath \"\"" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.220894 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.220925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9v79" event={"ID":"a410865c-527d-4070-8dcd-d4ef16f73c82","Type":"ContainerDied","Data":"df22393d6190b356304ceb603354e633ff117b718e6c20aa5931242ac9af2381"} Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.220972 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df22393d6190b356304ceb603354e633ff117b718e6c20aa5931242ac9af2381" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.221028 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g9rv4" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="registry-server" containerID="cri-o://93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54" gracePeriod=2 Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.319995 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll"] Jan 23 16:51:59 crc kubenswrapper[4718]: E0123 16:51:59.320677 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a410865c-527d-4070-8dcd-d4ef16f73c82" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.320702 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a410865c-527d-4070-8dcd-d4ef16f73c82" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.320979 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a410865c-527d-4070-8dcd-d4ef16f73c82" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.322027 
4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.324365 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.324715 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.324854 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.328452 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.335769 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll"] Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.442720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.442893 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.442924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdc4d\" (UniqueName: \"kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.545532 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.545985 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.546021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdc4d\" (UniqueName: \"kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.551982 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.552675 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.569427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdc4d\" (UniqueName: \"kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7jmll\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.728566 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.906557 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.954581 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content\") pod \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.954796 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities\") pod \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.955050 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctf2m\" (UniqueName: \"kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m\") pod \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\" (UID: \"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83\") " Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.955670 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities" (OuterVolumeSpecName: "utilities") pod "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" (UID: "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.956364 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:51:59 crc kubenswrapper[4718]: I0123 16:51:59.960046 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m" (OuterVolumeSpecName: "kube-api-access-ctf2m") pod "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" (UID: "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83"). InnerVolumeSpecName "kube-api-access-ctf2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.059162 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctf2m\" (UniqueName: \"kubernetes.io/projected/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-kube-api-access-ctf2m\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.247984 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerID="93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54" exitCode=0 Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.248290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerDied","Data":"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54"} Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.271437 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rv4" event={"ID":"fd6fc9ba-3339-49ad-bfe6-c60fe7370f83","Type":"ContainerDied","Data":"41cfd4659d059337219ab433a97e1ea8fcd817d134bb5615beadf380cbd3665d"} Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 
16:52:00.251771 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rv4" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.273727 4718 scope.go:117] "RemoveContainer" containerID="93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.299651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" (UID: "fd6fc9ba-3339-49ad-bfe6-c60fe7370f83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.367723 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.369820 4718 scope.go:117] "RemoveContainer" containerID="b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.478732 4718 scope.go:117] "RemoveContainer" containerID="95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.514482 4718 scope.go:117] "RemoveContainer" containerID="93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54" Jan 23 16:52:00 crc kubenswrapper[4718]: E0123 16:52:00.515000 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54\": container with ID starting with 93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54 not found: ID does not exist" 
containerID="93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.515065 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54"} err="failed to get container status \"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54\": rpc error: code = NotFound desc = could not find container \"93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54\": container with ID starting with 93ba53f53aeed3d102a5ba3669d0eff0cae260bf0fb03fd22beb7062c0c37f54 not found: ID does not exist" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.515095 4718 scope.go:117] "RemoveContainer" containerID="b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3" Jan 23 16:52:00 crc kubenswrapper[4718]: E0123 16:52:00.515503 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3\": container with ID starting with b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3 not found: ID does not exist" containerID="b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.515546 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3"} err="failed to get container status \"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3\": rpc error: code = NotFound desc = could not find container \"b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3\": container with ID starting with b15bb531185e1483f77ef586dbf051f088a8a4f4c3c5cec34a737044794e86b3 not found: ID does not exist" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.515573 4718 scope.go:117] 
"RemoveContainer" containerID="95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9" Jan 23 16:52:00 crc kubenswrapper[4718]: E0123 16:52:00.516068 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9\": container with ID starting with 95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9 not found: ID does not exist" containerID="95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.516095 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9"} err="failed to get container status \"95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9\": rpc error: code = NotFound desc = could not find container \"95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9\": container with ID starting with 95ce88eb11293ce45c2b7702a2e3d6bd6473d2f5badbd638616078c5e81cd8a9 not found: ID does not exist" Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.552484 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll"] Jan 23 16:52:00 crc kubenswrapper[4718]: W0123 16:52:00.564367 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a0278c_a9ab_4c21_af05_e4fed25e299d.slice/crio-f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c WatchSource:0}: Error finding container f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c: Status 404 returned error can't find the container with id f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.665955 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:52:00 crc kubenswrapper[4718]: I0123 16:52:00.683810 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g9rv4"] Jan 23 16:52:01 crc kubenswrapper[4718]: I0123 16:52:01.162122 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" path="/var/lib/kubelet/pods/fd6fc9ba-3339-49ad-bfe6-c60fe7370f83/volumes" Jan 23 16:52:01 crc kubenswrapper[4718]: I0123 16:52:01.267388 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" event={"ID":"25a0278c-a9ab-4c21-af05-e4fed25e299d","Type":"ContainerStarted","Data":"f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c"} Jan 23 16:52:02 crc kubenswrapper[4718]: I0123 16:52:02.280471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" event={"ID":"25a0278c-a9ab-4c21-af05-e4fed25e299d","Type":"ContainerStarted","Data":"05271fc53aa4e65e8e2b8b36e20ceca163fd9048cdd865e7552c0a1400393e61"} Jan 23 16:52:02 crc kubenswrapper[4718]: I0123 16:52:02.303227 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" podStartSLOduration=2.84200439 podStartE2EDuration="3.303200917s" podCreationTimestamp="2026-01-23 16:51:59 +0000 UTC" firstStartedPulling="2026-01-23 16:52:00.567046332 +0000 UTC m=+2121.714288323" lastFinishedPulling="2026-01-23 16:52:01.028242859 +0000 UTC m=+2122.175484850" observedRunningTime="2026-01-23 16:52:02.296079265 +0000 UTC m=+2123.443321266" watchObservedRunningTime="2026-01-23 16:52:02.303200917 +0000 UTC m=+2123.450442908" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.460491 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:05 crc kubenswrapper[4718]: E0123 16:52:05.462978 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="registry-server" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.463087 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="registry-server" Jan 23 16:52:05 crc kubenswrapper[4718]: E0123 16:52:05.463208 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="extract-utilities" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.463270 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="extract-utilities" Jan 23 16:52:05 crc kubenswrapper[4718]: E0123 16:52:05.463516 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="extract-content" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.463590 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="extract-content" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.463914 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6fc9ba-3339-49ad-bfe6-c60fe7370f83" containerName="registry-server" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.465711 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.489344 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.507339 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.507389 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzlr\" (UniqueName: \"kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.507780 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.609744 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.609890 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.609918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxzlr\" (UniqueName: \"kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.610359 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.610423 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.634454 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxzlr\" (UniqueName: \"kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr\") pod \"certified-operators-jklmg\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:05 crc kubenswrapper[4718]: I0123 16:52:05.788623 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:06 crc kubenswrapper[4718]: I0123 16:52:06.331097 4718 generic.go:334] "Generic (PLEG): container finished" podID="25a0278c-a9ab-4c21-af05-e4fed25e299d" containerID="05271fc53aa4e65e8e2b8b36e20ceca163fd9048cdd865e7552c0a1400393e61" exitCode=0 Jan 23 16:52:06 crc kubenswrapper[4718]: I0123 16:52:06.331299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" event={"ID":"25a0278c-a9ab-4c21-af05-e4fed25e299d","Type":"ContainerDied","Data":"05271fc53aa4e65e8e2b8b36e20ceca163fd9048cdd865e7552c0a1400393e61"} Jan 23 16:52:06 crc kubenswrapper[4718]: I0123 16:52:06.332227 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:07 crc kubenswrapper[4718]: I0123 16:52:07.342744 4718 generic.go:334] "Generic (PLEG): container finished" podID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerID="5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5" exitCode=0 Jan 23 16:52:07 crc kubenswrapper[4718]: I0123 16:52:07.343387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerDied","Data":"5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5"} Jan 23 16:52:07 crc kubenswrapper[4718]: I0123 16:52:07.343449 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerStarted","Data":"4893da0bdb293d994190ca5c89d736a79bd34136a8c0aa282151e9f2ee6a4f4b"} Jan 23 16:52:07 crc kubenswrapper[4718]: I0123 16:52:07.899593 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.075139 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdc4d\" (UniqueName: \"kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d\") pod \"25a0278c-a9ab-4c21-af05-e4fed25e299d\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.075512 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory\") pod \"25a0278c-a9ab-4c21-af05-e4fed25e299d\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.075586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam\") pod \"25a0278c-a9ab-4c21-af05-e4fed25e299d\" (UID: \"25a0278c-a9ab-4c21-af05-e4fed25e299d\") " Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.083974 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d" (OuterVolumeSpecName: "kube-api-access-hdc4d") pod "25a0278c-a9ab-4c21-af05-e4fed25e299d" (UID: "25a0278c-a9ab-4c21-af05-e4fed25e299d"). InnerVolumeSpecName "kube-api-access-hdc4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.138869 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25a0278c-a9ab-4c21-af05-e4fed25e299d" (UID: "25a0278c-a9ab-4c21-af05-e4fed25e299d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.148764 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory" (OuterVolumeSpecName: "inventory") pod "25a0278c-a9ab-4c21-af05-e4fed25e299d" (UID: "25a0278c-a9ab-4c21-af05-e4fed25e299d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.179299 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.179338 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25a0278c-a9ab-4c21-af05-e4fed25e299d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.179351 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdc4d\" (UniqueName: \"kubernetes.io/projected/25a0278c-a9ab-4c21-af05-e4fed25e299d-kube-api-access-hdc4d\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.353943 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" 
event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerStarted","Data":"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29"} Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.355418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" event={"ID":"25a0278c-a9ab-4c21-af05-e4fed25e299d","Type":"ContainerDied","Data":"f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c"} Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.355453 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f87022e8d98c21efeb334484d7262bda3a940f83158b9228ca43a7443676f92c" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.355509 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7jmll" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.419883 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx"] Jan 23 16:52:08 crc kubenswrapper[4718]: E0123 16:52:08.420532 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a0278c-a9ab-4c21-af05-e4fed25e299d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.420552 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a0278c-a9ab-4c21-af05-e4fed25e299d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.420760 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a0278c-a9ab-4c21-af05-e4fed25e299d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.421736 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.426375 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.426568 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.426989 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.427506 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.439048 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx"] Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.486885 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.487224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hpt\" (UniqueName: \"kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 
16:52:08.487327 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.587996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.588069 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9hpt\" (UniqueName: \"kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.588145 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.591789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.591798 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.607541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9hpt\" (UniqueName: \"kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4hx\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:08 crc kubenswrapper[4718]: I0123 16:52:08.742019 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:09 crc kubenswrapper[4718]: I0123 16:52:09.365562 4718 generic.go:334] "Generic (PLEG): container finished" podID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerID="d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29" exitCode=0 Jan 23 16:52:09 crc kubenswrapper[4718]: I0123 16:52:09.365877 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerDied","Data":"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29"} Jan 23 16:52:09 crc kubenswrapper[4718]: W0123 16:52:09.400332 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcec28e23_37c2_4a27_872d_40cb7ad130c5.slice/crio-dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4 WatchSource:0}: Error finding container dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4: Status 404 returned error can't find the container with id dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4 Jan 23 16:52:09 crc kubenswrapper[4718]: I0123 16:52:09.408228 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx"] Jan 23 16:52:10 crc kubenswrapper[4718]: I0123 16:52:10.387235 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerStarted","Data":"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4"} Jan 23 16:52:10 crc kubenswrapper[4718]: I0123 16:52:10.391020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" 
event={"ID":"cec28e23-37c2-4a27-872d-40cb7ad130c5","Type":"ContainerStarted","Data":"541f75c75bd6553929639824535214541d888b46cc3c08efb76197d4d9b53641"} Jan 23 16:52:10 crc kubenswrapper[4718]: I0123 16:52:10.391081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" event={"ID":"cec28e23-37c2-4a27-872d-40cb7ad130c5","Type":"ContainerStarted","Data":"dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4"} Jan 23 16:52:10 crc kubenswrapper[4718]: I0123 16:52:10.427775 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jklmg" podStartSLOduration=2.987192749 podStartE2EDuration="5.427750771s" podCreationTimestamp="2026-01-23 16:52:05 +0000 UTC" firstStartedPulling="2026-01-23 16:52:07.34695085 +0000 UTC m=+2128.494192881" lastFinishedPulling="2026-01-23 16:52:09.787508912 +0000 UTC m=+2130.934750903" observedRunningTime="2026-01-23 16:52:10.411163741 +0000 UTC m=+2131.558405732" watchObservedRunningTime="2026-01-23 16:52:10.427750771 +0000 UTC m=+2131.574992762" Jan 23 16:52:10 crc kubenswrapper[4718]: I0123 16:52:10.434144 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" podStartSLOduration=2.047383544 podStartE2EDuration="2.434127084s" podCreationTimestamp="2026-01-23 16:52:08 +0000 UTC" firstStartedPulling="2026-01-23 16:52:09.408935474 +0000 UTC m=+2130.556177465" lastFinishedPulling="2026-01-23 16:52:09.795679014 +0000 UTC m=+2130.942921005" observedRunningTime="2026-01-23 16:52:10.430140436 +0000 UTC m=+2131.577382427" watchObservedRunningTime="2026-01-23 16:52:10.434127084 +0000 UTC m=+2131.581369075" Jan 23 16:52:15 crc kubenswrapper[4718]: I0123 16:52:15.789075 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:15 crc 
kubenswrapper[4718]: I0123 16:52:15.789438 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:15 crc kubenswrapper[4718]: I0123 16:52:15.845311 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:16 crc kubenswrapper[4718]: I0123 16:52:16.507066 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:16 crc kubenswrapper[4718]: I0123 16:52:16.556234 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:18 crc kubenswrapper[4718]: I0123 16:52:18.469026 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jklmg" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="registry-server" containerID="cri-o://b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4" gracePeriod=2 Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.071416 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.182226 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxzlr\" (UniqueName: \"kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr\") pod \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.182346 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content\") pod \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.182393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities\") pod \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\" (UID: \"5f9ae84e-0afe-4ee6-881d-bd3b68569b85\") " Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.194883 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr" (OuterVolumeSpecName: "kube-api-access-cxzlr") pod "5f9ae84e-0afe-4ee6-881d-bd3b68569b85" (UID: "5f9ae84e-0afe-4ee6-881d-bd3b68569b85"). InnerVolumeSpecName "kube-api-access-cxzlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.195697 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities" (OuterVolumeSpecName: "utilities") pod "5f9ae84e-0afe-4ee6-881d-bd3b68569b85" (UID: "5f9ae84e-0afe-4ee6-881d-bd3b68569b85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.213836 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxzlr\" (UniqueName: \"kubernetes.io/projected/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-kube-api-access-cxzlr\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.213890 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.244359 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f9ae84e-0afe-4ee6-881d-bd3b68569b85" (UID: "5f9ae84e-0afe-4ee6-881d-bd3b68569b85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.317872 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ae84e-0afe-4ee6-881d-bd3b68569b85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.481011 4718 generic.go:334] "Generic (PLEG): container finished" podID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerID="b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4" exitCode=0 Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.481054 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerDied","Data":"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4"} Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.481097 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jklmg" event={"ID":"5f9ae84e-0afe-4ee6-881d-bd3b68569b85","Type":"ContainerDied","Data":"4893da0bdb293d994190ca5c89d736a79bd34136a8c0aa282151e9f2ee6a4f4b"} Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.481118 4718 scope.go:117] "RemoveContainer" containerID="b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.481131 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jklmg" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.505763 4718 scope.go:117] "RemoveContainer" containerID="d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.515586 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.540618 4718 scope.go:117] "RemoveContainer" containerID="5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.540877 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jklmg"] Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.611189 4718 scope.go:117] "RemoveContainer" containerID="b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4" Jan 23 16:52:19 crc kubenswrapper[4718]: E0123 16:52:19.612093 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4\": container with ID starting with b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4 not found: ID does not exist" containerID="b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 
16:52:19.612153 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4"} err="failed to get container status \"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4\": rpc error: code = NotFound desc = could not find container \"b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4\": container with ID starting with b365fe769064c6033e0987eea971694c488f29a6c57fa6e9ce6d1289b05086e4 not found: ID does not exist" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.612184 4718 scope.go:117] "RemoveContainer" containerID="d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29" Jan 23 16:52:19 crc kubenswrapper[4718]: E0123 16:52:19.612618 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29\": container with ID starting with d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29 not found: ID does not exist" containerID="d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.612670 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29"} err="failed to get container status \"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29\": rpc error: code = NotFound desc = could not find container \"d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29\": container with ID starting with d22a27a07b991fc41e676326d4095efcc10260ae479c8e3298cbf949d7671b29 not found: ID does not exist" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.612697 4718 scope.go:117] "RemoveContainer" containerID="5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5" Jan 23 16:52:19 crc 
kubenswrapper[4718]: E0123 16:52:19.613007 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5\": container with ID starting with 5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5 not found: ID does not exist" containerID="5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5" Jan 23 16:52:19 crc kubenswrapper[4718]: I0123 16:52:19.613049 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5"} err="failed to get container status \"5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5\": rpc error: code = NotFound desc = could not find container \"5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5\": container with ID starting with 5d31e12859e6ec02a98a8910683b18286f867b0b052e466793bc5c9f3d3b2de5 not found: ID does not exist" Jan 23 16:52:21 crc kubenswrapper[4718]: I0123 16:52:21.153077 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" path="/var/lib/kubelet/pods/5f9ae84e-0afe-4ee6-881d-bd3b68569b85/volumes" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.666381 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8pd8"] Jan 23 16:52:28 crc kubenswrapper[4718]: E0123 16:52:28.667315 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="extract-content" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.667328 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="extract-content" Jan 23 16:52:28 crc kubenswrapper[4718]: E0123 16:52:28.667368 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="registry-server" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.667374 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="registry-server" Jan 23 16:52:28 crc kubenswrapper[4718]: E0123 16:52:28.667385 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="extract-utilities" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.667391 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="extract-utilities" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.667623 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9ae84e-0afe-4ee6-881d-bd3b68569b85" containerName="registry-server" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.669443 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.680893 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8pd8"] Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.753813 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-utilities\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.753868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflkv\" (UniqueName: \"kubernetes.io/projected/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-kube-api-access-rflkv\") pod \"community-operators-k8pd8\" (UID: 
\"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.753907 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-catalog-content\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.856276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-utilities\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.856356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rflkv\" (UniqueName: \"kubernetes.io/projected/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-kube-api-access-rflkv\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.856420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-catalog-content\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.856766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-utilities\") pod \"community-operators-k8pd8\" (UID: 
\"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.857026 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-catalog-content\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.875602 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.875670 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.880178 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rflkv\" (UniqueName: \"kubernetes.io/projected/fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44-kube-api-access-rflkv\") pod \"community-operators-k8pd8\" (UID: \"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44\") " pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:28 crc kubenswrapper[4718]: I0123 16:52:28.989488 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:29 crc kubenswrapper[4718]: I0123 16:52:29.539519 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8pd8"] Jan 23 16:52:29 crc kubenswrapper[4718]: I0123 16:52:29.587663 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pd8" event={"ID":"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44","Type":"ContainerStarted","Data":"3593cb7dd7b7d3f0a65385800d77f38847f466b479d63953d60b0f846088a42d"} Jan 23 16:52:30 crc kubenswrapper[4718]: I0123 16:52:30.599755 4718 generic.go:334] "Generic (PLEG): container finished" podID="fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44" containerID="121d7cef96ff7bf9347cf0b5e46ba238f3b07a70128c5ed293650b191ad458f1" exitCode=0 Jan 23 16:52:30 crc kubenswrapper[4718]: I0123 16:52:30.599800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pd8" event={"ID":"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44","Type":"ContainerDied","Data":"121d7cef96ff7bf9347cf0b5e46ba238f3b07a70128c5ed293650b191ad458f1"} Jan 23 16:52:35 crc kubenswrapper[4718]: I0123 16:52:35.648892 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pd8" event={"ID":"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44","Type":"ContainerStarted","Data":"b26f669e72dbe0b6e24b7ec469689e1d95dab6050d75cff9b492c825d2c42d4d"} Jan 23 16:52:36 crc kubenswrapper[4718]: I0123 16:52:36.664073 4718 generic.go:334] "Generic (PLEG): container finished" podID="fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44" containerID="b26f669e72dbe0b6e24b7ec469689e1d95dab6050d75cff9b492c825d2c42d4d" exitCode=0 Jan 23 16:52:36 crc kubenswrapper[4718]: I0123 16:52:36.664192 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pd8" 
event={"ID":"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44","Type":"ContainerDied","Data":"b26f669e72dbe0b6e24b7ec469689e1d95dab6050d75cff9b492c825d2c42d4d"} Jan 23 16:52:37 crc kubenswrapper[4718]: I0123 16:52:37.684109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pd8" event={"ID":"fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44","Type":"ContainerStarted","Data":"6e5748fa32111f4508990a20e5711d1725c41bb78a6537f3b381a7713684cad1"} Jan 23 16:52:37 crc kubenswrapper[4718]: I0123 16:52:37.716229 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8pd8" podStartSLOduration=2.96633678 podStartE2EDuration="9.716205063s" podCreationTimestamp="2026-01-23 16:52:28 +0000 UTC" firstStartedPulling="2026-01-23 16:52:30.601657889 +0000 UTC m=+2151.748899880" lastFinishedPulling="2026-01-23 16:52:37.351526172 +0000 UTC m=+2158.498768163" observedRunningTime="2026-01-23 16:52:37.703848149 +0000 UTC m=+2158.851090140" watchObservedRunningTime="2026-01-23 16:52:37.716205063 +0000 UTC m=+2158.863447054" Jan 23 16:52:38 crc kubenswrapper[4718]: I0123 16:52:38.990546 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:38 crc kubenswrapper[4718]: I0123 16:52:38.991980 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:40 crc kubenswrapper[4718]: I0123 16:52:40.074019 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k8pd8" podUID="fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44" containerName="registry-server" probeResult="failure" output=< Jan 23 16:52:40 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 16:52:40 crc kubenswrapper[4718]: > Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.050765 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.128664 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8pd8" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.209676 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8pd8"] Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.306941 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cz87"] Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.307478 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5cz87" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="registry-server" containerID="cri-o://f9bcdb29d9e43e960cc0b70bbf45fbc988e8ff533552283892599a90177b372d" gracePeriod=2 Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.825672 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b24d924-160f-42a8-a416-54d63a814db4" containerID="f9bcdb29d9e43e960cc0b70bbf45fbc988e8ff533552283892599a90177b372d" exitCode=0 Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.826893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerDied","Data":"f9bcdb29d9e43e960cc0b70bbf45fbc988e8ff533552283892599a90177b372d"} Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.826950 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cz87" event={"ID":"1b24d924-160f-42a8-a416-54d63a814db4","Type":"ContainerDied","Data":"6645d63dc3a8052e6aef2f2a6d343be7dac994e19793ebc0e0fca48ed60c701d"} Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.826962 4718 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="6645d63dc3a8052e6aef2f2a6d343be7dac994e19793ebc0e0fca48ed60c701d" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.876471 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cz87" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.888165 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities\") pod \"1b24d924-160f-42a8-a416-54d63a814db4\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.888237 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbww\" (UniqueName: \"kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww\") pod \"1b24d924-160f-42a8-a416-54d63a814db4\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.888282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content\") pod \"1b24d924-160f-42a8-a416-54d63a814db4\" (UID: \"1b24d924-160f-42a8-a416-54d63a814db4\") " Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.889591 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities" (OuterVolumeSpecName: "utilities") pod "1b24d924-160f-42a8-a416-54d63a814db4" (UID: "1b24d924-160f-42a8-a416-54d63a814db4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.897144 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww" (OuterVolumeSpecName: "kube-api-access-wlbww") pod "1b24d924-160f-42a8-a416-54d63a814db4" (UID: "1b24d924-160f-42a8-a416-54d63a814db4"). InnerVolumeSpecName "kube-api-access-wlbww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.962338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b24d924-160f-42a8-a416-54d63a814db4" (UID: "1b24d924-160f-42a8-a416-54d63a814db4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.991885 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.991918 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlbww\" (UniqueName: \"kubernetes.io/projected/1b24d924-160f-42a8-a416-54d63a814db4-kube-api-access-wlbww\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:49 crc kubenswrapper[4718]: I0123 16:52:49.991929 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d924-160f-42a8-a416-54d63a814db4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:50 crc kubenswrapper[4718]: I0123 16:52:50.837239 4718 generic.go:334] "Generic (PLEG): container finished" podID="cec28e23-37c2-4a27-872d-40cb7ad130c5" 
containerID="541f75c75bd6553929639824535214541d888b46cc3c08efb76197d4d9b53641" exitCode=0 Jan 23 16:52:50 crc kubenswrapper[4718]: I0123 16:52:50.837340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" event={"ID":"cec28e23-37c2-4a27-872d-40cb7ad130c5","Type":"ContainerDied","Data":"541f75c75bd6553929639824535214541d888b46cc3c08efb76197d4d9b53641"} Jan 23 16:52:50 crc kubenswrapper[4718]: I0123 16:52:50.837916 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cz87" Jan 23 16:52:50 crc kubenswrapper[4718]: I0123 16:52:50.887785 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cz87"] Jan 23 16:52:50 crc kubenswrapper[4718]: I0123 16:52:50.900622 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5cz87"] Jan 23 16:52:51 crc kubenswrapper[4718]: I0123 16:52:51.155856 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b24d924-160f-42a8-a416-54d63a814db4" path="/var/lib/kubelet/pods/1b24d924-160f-42a8-a416-54d63a814db4/volumes" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.361190 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.450782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9hpt\" (UniqueName: \"kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt\") pod \"cec28e23-37c2-4a27-872d-40cb7ad130c5\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.450843 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam\") pod \"cec28e23-37c2-4a27-872d-40cb7ad130c5\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.450864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory\") pod \"cec28e23-37c2-4a27-872d-40cb7ad130c5\" (UID: \"cec28e23-37c2-4a27-872d-40cb7ad130c5\") " Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.462080 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt" (OuterVolumeSpecName: "kube-api-access-d9hpt") pod "cec28e23-37c2-4a27-872d-40cb7ad130c5" (UID: "cec28e23-37c2-4a27-872d-40cb7ad130c5"). InnerVolumeSpecName "kube-api-access-d9hpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.483306 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cec28e23-37c2-4a27-872d-40cb7ad130c5" (UID: "cec28e23-37c2-4a27-872d-40cb7ad130c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.492341 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory" (OuterVolumeSpecName: "inventory") pod "cec28e23-37c2-4a27-872d-40cb7ad130c5" (UID: "cec28e23-37c2-4a27-872d-40cb7ad130c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.554853 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9hpt\" (UniqueName: \"kubernetes.io/projected/cec28e23-37c2-4a27-872d-40cb7ad130c5-kube-api-access-d9hpt\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.554889 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.554900 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cec28e23-37c2-4a27-872d-40cb7ad130c5-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.866549 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" 
event={"ID":"cec28e23-37c2-4a27-872d-40cb7ad130c5","Type":"ContainerDied","Data":"dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4"} Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.867032 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd33e24226005839faeab886db151e575e2b621008e150c997ce920d26fa8eb4" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.866648 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4hx" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.995649 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg"] Jan 23 16:52:52 crc kubenswrapper[4718]: E0123 16:52:52.996654 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec28e23-37c2-4a27-872d-40cb7ad130c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.996699 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec28e23-37c2-4a27-872d-40cb7ad130c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:52 crc kubenswrapper[4718]: E0123 16:52:52.996747 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="registry-server" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.996756 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="registry-server" Jan 23 16:52:52 crc kubenswrapper[4718]: E0123 16:52:52.996802 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="extract-utilities" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.996809 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b24d924-160f-42a8-a416-54d63a814db4" 
containerName="extract-utilities" Jan 23 16:52:52 crc kubenswrapper[4718]: E0123 16:52:52.996836 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="extract-content" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.996843 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="extract-content" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.997282 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec28e23-37c2-4a27-872d-40cb7ad130c5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.997306 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b24d924-160f-42a8-a416-54d63a814db4" containerName="registry-server" Jan 23 16:52:52 crc kubenswrapper[4718]: I0123 16:52:52.998423 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.000654 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.001455 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.002068 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.002542 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.016273 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg"] Jan 23 
16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.067107 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.067208 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.067300 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fkw\" (UniqueName: \"kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.169514 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.169655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.169818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9fkw\" (UniqueName: \"kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.174441 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.181128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.186291 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9fkw\" (UniqueName: \"kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg\" (UID: 
\"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.323013 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.870819 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg"] Jan 23 16:52:53 crc kubenswrapper[4718]: I0123 16:52:53.877782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" event={"ID":"2fcefafb-b44e-4b47-a2f1-302f824b0dd5","Type":"ContainerStarted","Data":"1d7716e88ba1db00694c4f258d34254216dbcd138ba2bfcf1063671fb91f79a5"} Jan 23 16:52:54 crc kubenswrapper[4718]: I0123 16:52:54.895938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" event={"ID":"2fcefafb-b44e-4b47-a2f1-302f824b0dd5","Type":"ContainerStarted","Data":"65caf0da00d177a7aecd1038ba47956eae8e27a08251eabe9d4759b47e8c705d"} Jan 23 16:52:54 crc kubenswrapper[4718]: I0123 16:52:54.924616 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" podStartSLOduration=2.364568791 podStartE2EDuration="2.924592555s" podCreationTimestamp="2026-01-23 16:52:52 +0000 UTC" firstStartedPulling="2026-01-23 16:52:53.863997097 +0000 UTC m=+2175.011239088" lastFinishedPulling="2026-01-23 16:52:54.424020861 +0000 UTC m=+2175.571262852" observedRunningTime="2026-01-23 16:52:54.919375094 +0000 UTC m=+2176.066617105" watchObservedRunningTime="2026-01-23 16:52:54.924592555 +0000 UTC m=+2176.071834546" Jan 23 16:52:58 crc kubenswrapper[4718]: I0123 16:52:58.875387 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:52:58 crc kubenswrapper[4718]: I0123 16:52:58.876100 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.050803 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-5phqp"] Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.063296 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-5phqp"] Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.158736 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab31210-204e-4f0c-9aa7-ef99dc7db5c3" path="/var/lib/kubelet/pods/0ab31210-204e-4f0c-9aa7-ef99dc7db5c3/volumes" Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.692359 4718 scope.go:117] "RemoveContainer" containerID="f9bcdb29d9e43e960cc0b70bbf45fbc988e8ff533552283892599a90177b372d" Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.725995 4718 scope.go:117] "RemoveContainer" containerID="b8cc7436f0f6df1b416a761d350d4d69144230ab40e782642ae6173633dd86ac" Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.753915 4718 scope.go:117] "RemoveContainer" containerID="e996032703d2a23cedc05d499219975ca23fd258a96ea407eb5ca222051fccb1" Jan 23 16:53:01 crc kubenswrapper[4718]: I0123 16:53:01.832599 4718 scope.go:117] "RemoveContainer" containerID="9b50948d2400b9364d2d5853578db4b52537d85e47045765da022249d15f8d49" Jan 23 16:53:28 crc kubenswrapper[4718]: I0123 16:53:28.875557 4718 patch_prober.go:28] interesting 
pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:53:28 crc kubenswrapper[4718]: I0123 16:53:28.876213 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:53:28 crc kubenswrapper[4718]: I0123 16:53:28.876271 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:53:28 crc kubenswrapper[4718]: I0123 16:53:28.877299 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:53:28 crc kubenswrapper[4718]: I0123 16:53:28.877358 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4" gracePeriod=600 Jan 23 16:53:29 crc kubenswrapper[4718]: I0123 16:53:29.295869 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4" exitCode=0 Jan 23 16:53:29 crc kubenswrapper[4718]: I0123 16:53:29.296452 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4"} Jan 23 16:53:29 crc kubenswrapper[4718]: I0123 16:53:29.296479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512"} Jan 23 16:53:29 crc kubenswrapper[4718]: I0123 16:53:29.296500 4718 scope.go:117] "RemoveContainer" containerID="b1d67526e8b8c6c91f727cad66af68879bcf5e45fc1156d22f02c03f57d39aec" Jan 23 16:53:47 crc kubenswrapper[4718]: I0123 16:53:47.057566 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-5j2qd"] Jan 23 16:53:47 crc kubenswrapper[4718]: I0123 16:53:47.069525 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-5j2qd"] Jan 23 16:53:47 crc kubenswrapper[4718]: I0123 16:53:47.154370 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4" path="/var/lib/kubelet/pods/e8fc39ee-d9ec-43b5-89a6-4a8e31a4f6c4/volumes" Jan 23 16:53:48 crc kubenswrapper[4718]: I0123 16:53:48.528703 4718 generic.go:334] "Generic (PLEG): container finished" podID="2fcefafb-b44e-4b47-a2f1-302f824b0dd5" containerID="65caf0da00d177a7aecd1038ba47956eae8e27a08251eabe9d4759b47e8c705d" exitCode=0 Jan 23 16:53:48 crc kubenswrapper[4718]: I0123 16:53:48.528801 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" event={"ID":"2fcefafb-b44e-4b47-a2f1-302f824b0dd5","Type":"ContainerDied","Data":"65caf0da00d177a7aecd1038ba47956eae8e27a08251eabe9d4759b47e8c705d"} Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 
16:53:50.012859 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.191211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory\") pod \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.191486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam\") pod \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.191838 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9fkw\" (UniqueName: \"kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw\") pod \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\" (UID: \"2fcefafb-b44e-4b47-a2f1-302f824b0dd5\") " Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.197529 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw" (OuterVolumeSpecName: "kube-api-access-t9fkw") pod "2fcefafb-b44e-4b47-a2f1-302f824b0dd5" (UID: "2fcefafb-b44e-4b47-a2f1-302f824b0dd5"). InnerVolumeSpecName "kube-api-access-t9fkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.231446 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory" (OuterVolumeSpecName: "inventory") pod "2fcefafb-b44e-4b47-a2f1-302f824b0dd5" (UID: "2fcefafb-b44e-4b47-a2f1-302f824b0dd5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.233550 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2fcefafb-b44e-4b47-a2f1-302f824b0dd5" (UID: "2fcefafb-b44e-4b47-a2f1-302f824b0dd5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.294523 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.294558 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9fkw\" (UniqueName: \"kubernetes.io/projected/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-kube-api-access-t9fkw\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.294572 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fcefafb-b44e-4b47-a2f1-302f824b0dd5-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.554896 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" 
event={"ID":"2fcefafb-b44e-4b47-a2f1-302f824b0dd5","Type":"ContainerDied","Data":"1d7716e88ba1db00694c4f258d34254216dbcd138ba2bfcf1063671fb91f79a5"} Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.554939 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7716e88ba1db00694c4f258d34254216dbcd138ba2bfcf1063671fb91f79a5" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.554996 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.715147 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-46ztt"] Jan 23 16:53:50 crc kubenswrapper[4718]: E0123 16:53:50.716458 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcefafb-b44e-4b47-a2f1-302f824b0dd5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.716488 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcefafb-b44e-4b47-a2f1-302f824b0dd5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.716788 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcefafb-b44e-4b47-a2f1-302f824b0dd5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.718140 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.720942 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.721293 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.721496 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.721575 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.737083 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-46ztt"] Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.908596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.909032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:50 crc kubenswrapper[4718]: I0123 16:53:50.909718 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pcj5r\" (UniqueName: \"kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.011837 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.011960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.012025 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcj5r\" (UniqueName: \"kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.016992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.017458 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.033603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcj5r\" (UniqueName: \"kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r\") pod \"ssh-known-hosts-edpm-deployment-46ztt\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.052756 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.661494 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-46ztt"] Jan 23 16:53:51 crc kubenswrapper[4718]: W0123 16:53:51.664841 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c13cc81_99d0_465b_a13d_638a7482f669.slice/crio-4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5 WatchSource:0}: Error finding container 4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5: Status 404 returned error can't find the container with id 4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5 Jan 23 16:53:51 crc kubenswrapper[4718]: I0123 16:53:51.669561 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:53:52 crc kubenswrapper[4718]: I0123 16:53:52.578275 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" event={"ID":"1c13cc81-99d0-465b-a13d-638a7482f669","Type":"ContainerStarted","Data":"4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5"} Jan 23 16:53:53 crc kubenswrapper[4718]: I0123 16:53:53.588362 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" event={"ID":"1c13cc81-99d0-465b-a13d-638a7482f669","Type":"ContainerStarted","Data":"6a2cf1f7366681e4b863c0ac0357d5fd3100165c5bda176be085da431e792310"} Jan 23 16:53:53 crc kubenswrapper[4718]: I0123 16:53:53.613223 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" podStartSLOduration=2.90094424 podStartE2EDuration="3.61320489s" podCreationTimestamp="2026-01-23 16:53:50 +0000 UTC" firstStartedPulling="2026-01-23 16:53:51.669283555 +0000 UTC m=+2232.816525556" lastFinishedPulling="2026-01-23 16:53:52.381544205 +0000 UTC m=+2233.528786206" observedRunningTime="2026-01-23 16:53:53.604106254 +0000 UTC m=+2234.751348245" watchObservedRunningTime="2026-01-23 16:53:53.61320489 +0000 UTC m=+2234.760446881" Jan 23 16:53:59 crc kubenswrapper[4718]: I0123 16:53:59.664090 4718 generic.go:334] "Generic (PLEG): container finished" podID="1c13cc81-99d0-465b-a13d-638a7482f669" containerID="6a2cf1f7366681e4b863c0ac0357d5fd3100165c5bda176be085da431e792310" exitCode=0 Jan 23 16:53:59 crc kubenswrapper[4718]: I0123 16:53:59.664142 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" event={"ID":"1c13cc81-99d0-465b-a13d-638a7482f669","Type":"ContainerDied","Data":"6a2cf1f7366681e4b863c0ac0357d5fd3100165c5bda176be085da431e792310"} Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.263419 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.368770 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcj5r\" (UniqueName: \"kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r\") pod \"1c13cc81-99d0-465b-a13d-638a7482f669\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.368846 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam\") pod \"1c13cc81-99d0-465b-a13d-638a7482f669\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.369115 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0\") pod \"1c13cc81-99d0-465b-a13d-638a7482f669\" (UID: \"1c13cc81-99d0-465b-a13d-638a7482f669\") " Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.375827 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r" (OuterVolumeSpecName: "kube-api-access-pcj5r") pod "1c13cc81-99d0-465b-a13d-638a7482f669" (UID: "1c13cc81-99d0-465b-a13d-638a7482f669"). InnerVolumeSpecName "kube-api-access-pcj5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.410835 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c13cc81-99d0-465b-a13d-638a7482f669" (UID: "1c13cc81-99d0-465b-a13d-638a7482f669"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.441514 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1c13cc81-99d0-465b-a13d-638a7482f669" (UID: "1c13cc81-99d0-465b-a13d-638a7482f669"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.472400 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcj5r\" (UniqueName: \"kubernetes.io/projected/1c13cc81-99d0-465b-a13d-638a7482f669-kube-api-access-pcj5r\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.472443 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.472455 4718 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c13cc81-99d0-465b-a13d-638a7482f669-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.683493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" 
event={"ID":"1c13cc81-99d0-465b-a13d-638a7482f669","Type":"ContainerDied","Data":"4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5"} Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.683534 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c273f7aa9340ad57dff1efba021483a188035d1dbbb8c212eaf31462c493aa5" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.683591 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-46ztt" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.791190 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc"] Jan 23 16:54:01 crc kubenswrapper[4718]: E0123 16:54:01.791838 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c13cc81-99d0-465b-a13d-638a7482f669" containerName="ssh-known-hosts-edpm-deployment" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.791860 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c13cc81-99d0-465b-a13d-638a7482f669" containerName="ssh-known-hosts-edpm-deployment" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.792092 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c13cc81-99d0-465b-a13d-638a7482f669" containerName="ssh-known-hosts-edpm-deployment" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.793017 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.797194 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.797456 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.797568 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.797644 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.811278 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc"] Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.883107 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.883445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gz44\" (UniqueName: \"kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.883940 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.951565 4718 scope.go:117] "RemoveContainer" containerID="fa9f9f05c388113db8d8f34bfc6d659790fd3bed01a9cc36ba95a25f4b0f8ce2" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.986015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.986078 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gz44\" (UniqueName: \"kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.986171 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.990389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:01 crc kubenswrapper[4718]: I0123 16:54:01.990871 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:02 crc kubenswrapper[4718]: I0123 16:54:02.003759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gz44\" (UniqueName: \"kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cbknc\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:02 crc kubenswrapper[4718]: I0123 16:54:02.117731 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:02 crc kubenswrapper[4718]: I0123 16:54:02.656249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc"] Jan 23 16:54:02 crc kubenswrapper[4718]: I0123 16:54:02.695495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" event={"ID":"876f2274-0082-4049-a9a1-e8ed6b517b57","Type":"ContainerStarted","Data":"e81ba2cb48433bcbf282f743aeef0e1e9852904f32cfe97dc97c4c076ed73c61"} Jan 23 16:54:03 crc kubenswrapper[4718]: I0123 16:54:03.713281 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" event={"ID":"876f2274-0082-4049-a9a1-e8ed6b517b57","Type":"ContainerStarted","Data":"46f72aa609546a9bab64baa0cee9f1278a41eca3e8981dfd12f45bb1861605c1"} Jan 23 16:54:03 crc kubenswrapper[4718]: I0123 16:54:03.734184 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" podStartSLOduration=2.295770693 podStartE2EDuration="2.734163132s" podCreationTimestamp="2026-01-23 16:54:01 +0000 UTC" firstStartedPulling="2026-01-23 16:54:02.65758396 +0000 UTC m=+2243.804825951" lastFinishedPulling="2026-01-23 16:54:03.095976399 +0000 UTC m=+2244.243218390" observedRunningTime="2026-01-23 16:54:03.73112241 +0000 UTC m=+2244.878364411" watchObservedRunningTime="2026-01-23 16:54:03.734163132 +0000 UTC m=+2244.881405123" Jan 23 16:54:11 crc kubenswrapper[4718]: I0123 16:54:11.804422 4718 generic.go:334] "Generic (PLEG): container finished" podID="876f2274-0082-4049-a9a1-e8ed6b517b57" containerID="46f72aa609546a9bab64baa0cee9f1278a41eca3e8981dfd12f45bb1861605c1" exitCode=0 Jan 23 16:54:11 crc kubenswrapper[4718]: I0123 16:54:11.804506 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" event={"ID":"876f2274-0082-4049-a9a1-e8ed6b517b57","Type":"ContainerDied","Data":"46f72aa609546a9bab64baa0cee9f1278a41eca3e8981dfd12f45bb1861605c1"} Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.294047 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.401023 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gz44\" (UniqueName: \"kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44\") pod \"876f2274-0082-4049-a9a1-e8ed6b517b57\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.401217 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam\") pod \"876f2274-0082-4049-a9a1-e8ed6b517b57\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.401403 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory\") pod \"876f2274-0082-4049-a9a1-e8ed6b517b57\" (UID: \"876f2274-0082-4049-a9a1-e8ed6b517b57\") " Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.413934 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44" (OuterVolumeSpecName: "kube-api-access-4gz44") pod "876f2274-0082-4049-a9a1-e8ed6b517b57" (UID: "876f2274-0082-4049-a9a1-e8ed6b517b57"). InnerVolumeSpecName "kube-api-access-4gz44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.431833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "876f2274-0082-4049-a9a1-e8ed6b517b57" (UID: "876f2274-0082-4049-a9a1-e8ed6b517b57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.436857 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory" (OuterVolumeSpecName: "inventory") pod "876f2274-0082-4049-a9a1-e8ed6b517b57" (UID: "876f2274-0082-4049-a9a1-e8ed6b517b57"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.505716 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.505757 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/876f2274-0082-4049-a9a1-e8ed6b517b57-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.506014 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gz44\" (UniqueName: \"kubernetes.io/projected/876f2274-0082-4049-a9a1-e8ed6b517b57-kube-api-access-4gz44\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.824587 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" 
event={"ID":"876f2274-0082-4049-a9a1-e8ed6b517b57","Type":"ContainerDied","Data":"e81ba2cb48433bcbf282f743aeef0e1e9852904f32cfe97dc97c4c076ed73c61"} Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.824883 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81ba2cb48433bcbf282f743aeef0e1e9852904f32cfe97dc97c4c076ed73c61" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.824637 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cbknc" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.941201 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh"] Jan 23 16:54:13 crc kubenswrapper[4718]: E0123 16:54:13.941718 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876f2274-0082-4049-a9a1-e8ed6b517b57" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.941736 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="876f2274-0082-4049-a9a1-e8ed6b517b57" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.941981 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="876f2274-0082-4049-a9a1-e8ed6b517b57" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.942824 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.945132 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.945289 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.945142 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.950035 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:54:13 crc kubenswrapper[4718]: I0123 16:54:13.956258 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh"] Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.124728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6flmj\" (UniqueName: \"kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.124858 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 
16:54:14.124923 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.228032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.228127 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.228304 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6flmj\" (UniqueName: \"kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.233195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.233195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.248454 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6flmj\" (UniqueName: \"kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.271751 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.811670 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh"] Jan 23 16:54:14 crc kubenswrapper[4718]: I0123 16:54:14.835468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" event={"ID":"ecb3058c-dfcb-4950-8c2a-3dba0200135f","Type":"ContainerStarted","Data":"133ad3092e4330ab50b4a3bc86210041a99da1d3d747170d0d2cdfd699efcfbb"} Jan 23 16:54:15 crc kubenswrapper[4718]: I0123 16:54:15.847426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" event={"ID":"ecb3058c-dfcb-4950-8c2a-3dba0200135f","Type":"ContainerStarted","Data":"8db02033aa34a0d2e74598e149b899e4e6eb2f5691e481237e0878d55274df95"} Jan 23 16:54:15 crc kubenswrapper[4718]: I0123 16:54:15.866673 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" podStartSLOduration=2.383492247 podStartE2EDuration="2.866624199s" podCreationTimestamp="2026-01-23 16:54:13 +0000 UTC" firstStartedPulling="2026-01-23 16:54:14.806866222 +0000 UTC m=+2255.954108203" lastFinishedPulling="2026-01-23 16:54:15.289998164 +0000 UTC m=+2256.437240155" observedRunningTime="2026-01-23 16:54:15.860465223 +0000 UTC m=+2257.007707214" watchObservedRunningTime="2026-01-23 16:54:15.866624199 +0000 UTC m=+2257.013866200" Jan 23 16:54:25 crc kubenswrapper[4718]: I0123 16:54:25.293915 4718 generic.go:334] "Generic (PLEG): container finished" podID="ecb3058c-dfcb-4950-8c2a-3dba0200135f" containerID="8db02033aa34a0d2e74598e149b899e4e6eb2f5691e481237e0878d55274df95" exitCode=0 Jan 23 16:54:25 crc kubenswrapper[4718]: I0123 16:54:25.294027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" event={"ID":"ecb3058c-dfcb-4950-8c2a-3dba0200135f","Type":"ContainerDied","Data":"8db02033aa34a0d2e74598e149b899e4e6eb2f5691e481237e0878d55274df95"} Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.793952 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.849750 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam\") pod \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.850266 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory\") pod \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.850561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6flmj\" (UniqueName: \"kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj\") pod \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\" (UID: \"ecb3058c-dfcb-4950-8c2a-3dba0200135f\") " Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.857593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj" (OuterVolumeSpecName: "kube-api-access-6flmj") pod "ecb3058c-dfcb-4950-8c2a-3dba0200135f" (UID: "ecb3058c-dfcb-4950-8c2a-3dba0200135f"). InnerVolumeSpecName "kube-api-access-6flmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.860061 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6flmj\" (UniqueName: \"kubernetes.io/projected/ecb3058c-dfcb-4950-8c2a-3dba0200135f-kube-api-access-6flmj\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.888764 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory" (OuterVolumeSpecName: "inventory") pod "ecb3058c-dfcb-4950-8c2a-3dba0200135f" (UID: "ecb3058c-dfcb-4950-8c2a-3dba0200135f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.897193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ecb3058c-dfcb-4950-8c2a-3dba0200135f" (UID: "ecb3058c-dfcb-4950-8c2a-3dba0200135f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.963024 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:26 crc kubenswrapper[4718]: I0123 16:54:26.963101 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecb3058c-dfcb-4950-8c2a-3dba0200135f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.328480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" event={"ID":"ecb3058c-dfcb-4950-8c2a-3dba0200135f","Type":"ContainerDied","Data":"133ad3092e4330ab50b4a3bc86210041a99da1d3d747170d0d2cdfd699efcfbb"} Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.328527 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133ad3092e4330ab50b4a3bc86210041a99da1d3d747170d0d2cdfd699efcfbb" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.328573 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.414716 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8"] Jan 23 16:54:27 crc kubenswrapper[4718]: E0123 16:54:27.415330 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb3058c-dfcb-4950-8c2a-3dba0200135f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.415359 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb3058c-dfcb-4950-8c2a-3dba0200135f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.415586 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb3058c-dfcb-4950-8c2a-3dba0200135f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.416400 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.419275 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.419450 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.419555 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.419565 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.419934 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.420175 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.420812 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.421585 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.422400 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.442486 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8"] Jan 23 16:54:27 crc 
kubenswrapper[4718]: I0123 16:54:27.474266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474313 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdz4\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474387 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474406 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474465 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474619 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.474671 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475168 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475227 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475317 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.475423 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.576867 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.576914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.576956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.576997 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577024 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577132 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577175 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbdz4\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577216 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.577255 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.578243 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.578348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.578448 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.578501 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582150 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582433 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582594 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.582751 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.583348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.583441 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.583487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.584121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.584477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.584750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.585421 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.590811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.593149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.594342 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbdz4\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k98h8\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:27 crc kubenswrapper[4718]: I0123 16:54:27.738470 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:54:28 crc kubenswrapper[4718]: I0123 16:54:28.299758 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8"] Jan 23 16:54:28 crc kubenswrapper[4718]: I0123 16:54:28.347523 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" event={"ID":"e4ba8316-551c-484b-b458-1feab6b0e72b","Type":"ContainerStarted","Data":"9325665e0f8b67edb97047beb4d9cbc1a7145384a869755c69eb3fc1e6402d3a"} Jan 23 16:54:29 crc kubenswrapper[4718]: I0123 16:54:29.359578 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" event={"ID":"e4ba8316-551c-484b-b458-1feab6b0e72b","Type":"ContainerStarted","Data":"4bf0a19e695c5fa698b54d5b60e7196fce21ea3fab53abdb49297a4812ffef27"} Jan 23 16:54:29 crc kubenswrapper[4718]: I0123 16:54:29.392388 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" podStartSLOduration=1.850539499 podStartE2EDuration="2.392366595s" podCreationTimestamp="2026-01-23 16:54:27 +0000 UTC" firstStartedPulling="2026-01-23 16:54:28.304885178 +0000 UTC m=+2269.452127179" lastFinishedPulling="2026-01-23 16:54:28.846712234 +0000 UTC m=+2269.993954275" observedRunningTime="2026-01-23 16:54:29.385195001 +0000 UTC m=+2270.532436992" watchObservedRunningTime="2026-01-23 16:54:29.392366595 +0000 UTC m=+2270.539608586" Jan 23 16:55:12 crc kubenswrapper[4718]: I0123 16:55:12.899499 4718 generic.go:334] "Generic (PLEG): container finished" podID="e4ba8316-551c-484b-b458-1feab6b0e72b" containerID="4bf0a19e695c5fa698b54d5b60e7196fce21ea3fab53abdb49297a4812ffef27" exitCode=0 Jan 23 16:55:12 crc kubenswrapper[4718]: I0123 16:55:12.899529 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" event={"ID":"e4ba8316-551c-484b-b458-1feab6b0e72b","Type":"ContainerDied","Data":"4bf0a19e695c5fa698b54d5b60e7196fce21ea3fab53abdb49297a4812ffef27"} Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.408407 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.526973 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527072 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527131 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527188 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 
16:55:14.527239 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbdz4\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527315 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527335 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527388 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 
16:55:14.527442 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527525 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527661 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.527703 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"e4ba8316-551c-484b-b458-1feab6b0e72b\" (UID: \"e4ba8316-551c-484b-b458-1feab6b0e72b\") " Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.534176 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.535745 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.536090 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.536573 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.536776 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4" (OuterVolumeSpecName: "kube-api-access-jbdz4") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "kube-api-access-jbdz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.538894 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.538970 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.541220 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.541406 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.542038 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.543298 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.547572 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.548076 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.549691 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.578906 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.589722 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory" (OuterVolumeSpecName: "inventory") pod "e4ba8316-551c-484b-b458-1feab6b0e72b" (UID: "e4ba8316-551c-484b-b458-1feab6b0e72b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.632668 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbdz4\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-kube-api-access-jbdz4\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633128 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633145 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: 
I0123 16:55:14.633160 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633171 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633182 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633191 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633203 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633212 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633224 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633234 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633245 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633258 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633271 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e4ba8316-551c-484b-b458-1feab6b0e72b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633283 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.633294 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4ba8316-551c-484b-b458-1feab6b0e72b-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:14 crc 
kubenswrapper[4718]: I0123 16:55:14.924209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" event={"ID":"e4ba8316-551c-484b-b458-1feab6b0e72b","Type":"ContainerDied","Data":"9325665e0f8b67edb97047beb4d9cbc1a7145384a869755c69eb3fc1e6402d3a"} Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.924255 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9325665e0f8b67edb97047beb4d9cbc1a7145384a869755c69eb3fc1e6402d3a" Jan 23 16:55:14 crc kubenswrapper[4718]: I0123 16:55:14.924270 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k98h8" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.035854 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4"] Jan 23 16:55:15 crc kubenswrapper[4718]: E0123 16:55:15.036646 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ba8316-551c-484b-b458-1feab6b0e72b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.036666 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ba8316-551c-484b-b458-1feab6b0e72b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.036898 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ba8316-551c-484b-b458-1feab6b0e72b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.037796 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.040895 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.041003 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.042872 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.048265 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.049174 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.053240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4"] Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.146082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.146141 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.146214 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.146260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.146337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpt8\" (UniqueName: \"kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.249143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpt8\" (UniqueName: \"kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.249262 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.249875 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.250027 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.250241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.251293 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.255425 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.255575 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.260823 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.287449 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpt8\" (UniqueName: \"kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-2lgn4\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.379282 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:55:15 crc kubenswrapper[4718]: I0123 16:55:15.962660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4"] Jan 23 16:55:16 crc kubenswrapper[4718]: I0123 16:55:16.944653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" event={"ID":"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3","Type":"ContainerStarted","Data":"af569e053dea049330a2705fbdb190f72f2dde575b5dd1c96ff96499b06eb045"} Jan 23 16:55:16 crc kubenswrapper[4718]: I0123 16:55:16.944969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" event={"ID":"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3","Type":"ContainerStarted","Data":"92e68db082cfbd2cb263e4b24db9686a0b14857593cc93fa08a28aa014715a2d"} Jan 23 16:55:16 crc kubenswrapper[4718]: I0123 16:55:16.970334 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" podStartSLOduration=1.5088165390000001 podStartE2EDuration="1.97031347s" podCreationTimestamp="2026-01-23 16:55:15 +0000 UTC" firstStartedPulling="2026-01-23 16:55:15.964367211 +0000 UTC m=+2317.111609202" lastFinishedPulling="2026-01-23 16:55:16.425864142 +0000 UTC m=+2317.573106133" observedRunningTime="2026-01-23 16:55:16.96331344 +0000 UTC m=+2318.110555431" watchObservedRunningTime="2026-01-23 16:55:16.97031347 +0000 UTC m=+2318.117555471" Jan 23 16:55:58 crc kubenswrapper[4718]: I0123 16:55:58.875541 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:55:58 crc kubenswrapper[4718]: I0123 16:55:58.876154 
4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:56:22 crc kubenswrapper[4718]: I0123 16:56:22.689368 4718 generic.go:334] "Generic (PLEG): container finished" podID="3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" containerID="af569e053dea049330a2705fbdb190f72f2dde575b5dd1c96ff96499b06eb045" exitCode=0 Jan 23 16:56:22 crc kubenswrapper[4718]: I0123 16:56:22.689919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" event={"ID":"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3","Type":"ContainerDied","Data":"af569e053dea049330a2705fbdb190f72f2dde575b5dd1c96ff96499b06eb045"} Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.224472 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.318005 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle\") pod \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.318261 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam\") pod \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.318298 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvpt8\" (UniqueName: \"kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8\") pod \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.318324 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory\") pod \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.318394 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0\") pod \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\" (UID: \"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3\") " Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.324459 4718 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8" (OuterVolumeSpecName: "kube-api-access-rvpt8") pod "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" (UID: "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3"). InnerVolumeSpecName "kube-api-access-rvpt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.331148 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" (UID: "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.349556 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" (UID: "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.351534 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" (UID: "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.361111 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory" (OuterVolumeSpecName: "inventory") pod "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" (UID: "3c8cfb53-9d77-472b-a67e-cfe479ef8aa3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.420759 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.420826 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.420839 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvpt8\" (UniqueName: \"kubernetes.io/projected/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-kube-api-access-rvpt8\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.420855 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.420866 4718 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c8cfb53-9d77-472b-a67e-cfe479ef8aa3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.720592 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" event={"ID":"3c8cfb53-9d77-472b-a67e-cfe479ef8aa3","Type":"ContainerDied","Data":"92e68db082cfbd2cb263e4b24db9686a0b14857593cc93fa08a28aa014715a2d"} Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.720909 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e68db082cfbd2cb263e4b24db9686a0b14857593cc93fa08a28aa014715a2d" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.720971 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-2lgn4" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.825562 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w"] Jan 23 16:56:24 crc kubenswrapper[4718]: E0123 16:56:24.826232 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.826261 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.826596 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8cfb53-9d77-472b-a67e-cfe479ef8aa3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.827619 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.835875 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.836053 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.836237 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.836351 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.836563 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.840667 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.841068 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w"] Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.937026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.937420 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.937579 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzntj\" (UniqueName: \"kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.937660 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.937899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:24 crc kubenswrapper[4718]: I0123 16:56:24.938152 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: E0123 16:56:25.021277 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c8cfb53_9d77_472b_a67e_cfe479ef8aa3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c8cfb53_9d77_472b_a67e_cfe479ef8aa3.slice/crio-92e68db082cfbd2cb263e4b24db9686a0b14857593cc93fa08a28aa014715a2d\": RecentStats: unable to find data in memory cache]" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzntj\" (UniqueName: \"kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041255 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041306 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.041462 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.046575 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.046609 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.046887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.049185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.049731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.065107 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzntj\" (UniqueName: \"kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.163312 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.705767 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w"] Jan 23 16:56:25 crc kubenswrapper[4718]: I0123 16:56:25.737709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" event={"ID":"90730357-1c99-420b-8ff6-f82638fbd43f","Type":"ContainerStarted","Data":"3a83134d476f831ce0d1540c08bf5238e932066c8d0ad38e7d1c022f6a392ed6"} Jan 23 16:56:26 crc kubenswrapper[4718]: I0123 16:56:26.750870 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" event={"ID":"90730357-1c99-420b-8ff6-f82638fbd43f","Type":"ContainerStarted","Data":"1c830c047040b73e6c2b825603cc5a5b381f2fad1a25ae8459cab738dec70a07"} Jan 23 16:56:26 crc kubenswrapper[4718]: I0123 16:56:26.778130 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" podStartSLOduration=2.194456374 
podStartE2EDuration="2.778114793s" podCreationTimestamp="2026-01-23 16:56:24 +0000 UTC" firstStartedPulling="2026-01-23 16:56:25.714380958 +0000 UTC m=+2386.861622949" lastFinishedPulling="2026-01-23 16:56:26.298039377 +0000 UTC m=+2387.445281368" observedRunningTime="2026-01-23 16:56:26.774111324 +0000 UTC m=+2387.921353325" watchObservedRunningTime="2026-01-23 16:56:26.778114793 +0000 UTC m=+2387.925356784" Jan 23 16:56:28 crc kubenswrapper[4718]: I0123 16:56:28.875397 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:56:28 crc kubenswrapper[4718]: I0123 16:56:28.875871 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4718]: I0123 16:56:58.875655 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:56:58 crc kubenswrapper[4718]: I0123 16:56:58.876483 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4718]: I0123 16:56:58.876544 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 16:56:58 crc kubenswrapper[4718]: I0123 16:56:58.878127 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:56:58 crc kubenswrapper[4718]: I0123 16:56:58.878195 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" gracePeriod=600 Jan 23 16:56:59 crc kubenswrapper[4718]: E0123 16:56:59.013705 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:56:59 crc kubenswrapper[4718]: I0123 16:56:59.117527 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" exitCode=0 Jan 23 16:56:59 crc kubenswrapper[4718]: I0123 16:56:59.117581 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512"} Jan 23 16:56:59 crc kubenswrapper[4718]: I0123 16:56:59.117639 4718 scope.go:117] "RemoveContainer" containerID="cd13cf13fbb6dcb61c95d462a30fb29710e34c4db432a129f54b3375b2b755c4" Jan 23 16:56:59 crc kubenswrapper[4718]: I0123 16:56:59.118940 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:56:59 crc kubenswrapper[4718]: E0123 16:56:59.119388 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:57:14 crc kubenswrapper[4718]: I0123 16:57:14.140869 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:57:14 crc kubenswrapper[4718]: E0123 16:57:14.142033 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:57:20 crc kubenswrapper[4718]: I0123 16:57:20.349702 4718 generic.go:334] "Generic (PLEG): container finished" podID="90730357-1c99-420b-8ff6-f82638fbd43f" containerID="1c830c047040b73e6c2b825603cc5a5b381f2fad1a25ae8459cab738dec70a07" exitCode=0 Jan 23 16:57:20 crc kubenswrapper[4718]: I0123 16:57:20.349768 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" event={"ID":"90730357-1c99-420b-8ff6-f82638fbd43f","Type":"ContainerDied","Data":"1c830c047040b73e6c2b825603cc5a5b381f2fad1a25ae8459cab738dec70a07"} Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.873124 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.958235 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle\") pod \"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.958533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.958822 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0\") pod \"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.958917 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam\") pod 
\"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.959055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzntj\" (UniqueName: \"kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj\") pod \"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.959245 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory\") pod \"90730357-1c99-420b-8ff6-f82638fbd43f\" (UID: \"90730357-1c99-420b-8ff6-f82638fbd43f\") " Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.964840 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.970295 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj" (OuterVolumeSpecName: "kube-api-access-zzntj") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "kube-api-access-zzntj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.994957 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory" (OuterVolumeSpecName: "inventory") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.998118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:21 crc kubenswrapper[4718]: I0123 16:57:21.999445 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.002838 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90730357-1c99-420b-8ff6-f82638fbd43f" (UID: "90730357-1c99-420b-8ff6-f82638fbd43f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063131 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063179 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063189 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063201 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzntj\" (UniqueName: \"kubernetes.io/projected/90730357-1c99-420b-8ff6-f82638fbd43f-kube-api-access-zzntj\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063214 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.063245 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90730357-1c99-420b-8ff6-f82638fbd43f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.371782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" event={"ID":"90730357-1c99-420b-8ff6-f82638fbd43f","Type":"ContainerDied","Data":"3a83134d476f831ce0d1540c08bf5238e932066c8d0ad38e7d1c022f6a392ed6"} Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.372064 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a83134d476f831ce0d1540c08bf5238e932066c8d0ad38e7d1c022f6a392ed6" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.372114 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.502742 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv"] Jan 23 16:57:22 crc kubenswrapper[4718]: E0123 16:57:22.509570 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90730357-1c99-420b-8ff6-f82638fbd43f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.509600 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="90730357-1c99-420b-8ff6-f82638fbd43f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.509922 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="90730357-1c99-420b-8ff6-f82638fbd43f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.510784 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.512621 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.513745 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.513771 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.515474 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.517291 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.533322 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv"] Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.581262 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf47k\" (UniqueName: \"kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.581336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: 
\"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.581422 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.581754 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.581823 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.685298 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.685378 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.685586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf47k\" (UniqueName: \"kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.685727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.685851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.690786 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: 
\"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.692086 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.695607 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.706879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf47k\" (UniqueName: \"kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.707829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:22 crc kubenswrapper[4718]: I0123 16:57:22.833896 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 16:57:23 crc kubenswrapper[4718]: I0123 16:57:23.356891 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv"] Jan 23 16:57:23 crc kubenswrapper[4718]: I0123 16:57:23.382891 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" event={"ID":"680f27a4-945b-4f46-ae19-c0b05b6f3d4c","Type":"ContainerStarted","Data":"316309a76fb09b31228d87574cc035d5319454e53d07c28075975c09e7d94082"} Jan 23 16:57:24 crc kubenswrapper[4718]: I0123 16:57:24.394599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" event={"ID":"680f27a4-945b-4f46-ae19-c0b05b6f3d4c","Type":"ContainerStarted","Data":"063d7de350dc0e55ed014c21b3fbd26b8ed7bbd4a108a4d2864ebdbca9bcfe08"} Jan 23 16:57:24 crc kubenswrapper[4718]: I0123 16:57:24.421930 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" podStartSLOduration=1.877277682 podStartE2EDuration="2.421912204s" podCreationTimestamp="2026-01-23 16:57:22 +0000 UTC" firstStartedPulling="2026-01-23 16:57:23.358274122 +0000 UTC m=+2444.505516113" lastFinishedPulling="2026-01-23 16:57:23.902908644 +0000 UTC m=+2445.050150635" observedRunningTime="2026-01-23 16:57:24.407948796 +0000 UTC m=+2445.555190787" watchObservedRunningTime="2026-01-23 16:57:24.421912204 +0000 UTC m=+2445.569154195" Jan 23 16:57:26 crc kubenswrapper[4718]: I0123 16:57:26.145506 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:57:26 crc kubenswrapper[4718]: E0123 16:57:26.147901 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:57:37 crc kubenswrapper[4718]: I0123 16:57:37.140584 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:57:37 crc kubenswrapper[4718]: E0123 16:57:37.141747 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:57:51 crc kubenswrapper[4718]: I0123 16:57:51.140275 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:57:51 crc kubenswrapper[4718]: E0123 16:57:51.141039 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:58:02 crc kubenswrapper[4718]: I0123 16:58:02.141132 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:58:02 crc kubenswrapper[4718]: E0123 16:58:02.141818 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:58:16 crc kubenswrapper[4718]: I0123 16:58:16.140364 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:58:16 crc kubenswrapper[4718]: E0123 16:58:16.141253 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:58:27 crc kubenswrapper[4718]: I0123 16:58:27.141120 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:58:27 crc kubenswrapper[4718]: E0123 16:58:27.141926 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:58:42 crc kubenswrapper[4718]: I0123 16:58:42.141724 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:58:42 crc kubenswrapper[4718]: E0123 16:58:42.142447 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:58:57 crc kubenswrapper[4718]: I0123 16:58:57.140340 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:58:57 crc kubenswrapper[4718]: E0123 16:58:57.141300 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:59:08 crc kubenswrapper[4718]: I0123 16:59:08.140724 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:59:08 crc kubenswrapper[4718]: E0123 16:59:08.141722 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:59:21 crc kubenswrapper[4718]: I0123 16:59:21.140854 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:59:21 crc kubenswrapper[4718]: E0123 16:59:21.142509 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:59:36 crc kubenswrapper[4718]: I0123 16:59:36.140928 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:59:36 crc kubenswrapper[4718]: E0123 16:59:36.142533 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 16:59:51 crc kubenswrapper[4718]: I0123 16:59:51.140766 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 16:59:51 crc kubenswrapper[4718]: E0123 16:59:51.141609 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.155673 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9"] Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.158361 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.160395 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.161740 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.181744 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9"] Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.340095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.340461 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92lfq\" (UniqueName: \"kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.340639 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.442203 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.442380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92lfq\" (UniqueName: \"kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.442474 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.447463 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.452237 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.466124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92lfq\" (UniqueName: \"kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq\") pod \"collect-profiles-29486460-kk6n9\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.482578 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:00 crc kubenswrapper[4718]: I0123 17:00:00.971340 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9"] Jan 23 17:00:01 crc kubenswrapper[4718]: I0123 17:00:01.407605 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" event={"ID":"c19fb3d0-e348-47e4-8318-677140740104","Type":"ContainerStarted","Data":"f1a85edb1871cf8576f9b933c3473792199113e2020f9154cd5c1f5c540f2684"} Jan 23 17:00:01 crc kubenswrapper[4718]: I0123 17:00:01.407924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" event={"ID":"c19fb3d0-e348-47e4-8318-677140740104","Type":"ContainerStarted","Data":"f6339caa1b29d85c1cc1bb59304c1ffbed341c10cec976f374326f01b5645ef7"} Jan 23 17:00:01 crc kubenswrapper[4718]: I0123 17:00:01.438171 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" 
podStartSLOduration=1.4381520349999999 podStartE2EDuration="1.438152035s" podCreationTimestamp="2026-01-23 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:00:01.431168327 +0000 UTC m=+2602.578410328" watchObservedRunningTime="2026-01-23 17:00:01.438152035 +0000 UTC m=+2602.585394026" Jan 23 17:00:02 crc kubenswrapper[4718]: I0123 17:00:02.141809 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:00:02 crc kubenswrapper[4718]: E0123 17:00:02.142442 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:00:02 crc kubenswrapper[4718]: I0123 17:00:02.420007 4718 generic.go:334] "Generic (PLEG): container finished" podID="c19fb3d0-e348-47e4-8318-677140740104" containerID="f1a85edb1871cf8576f9b933c3473792199113e2020f9154cd5c1f5c540f2684" exitCode=0 Jan 23 17:00:02 crc kubenswrapper[4718]: I0123 17:00:02.420047 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" event={"ID":"c19fb3d0-e348-47e4-8318-677140740104","Type":"ContainerDied","Data":"f1a85edb1871cf8576f9b933c3473792199113e2020f9154cd5c1f5c540f2684"} Jan 23 17:00:03 crc kubenswrapper[4718]: I0123 17:00:03.936758 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.036483 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92lfq\" (UniqueName: \"kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq\") pod \"c19fb3d0-e348-47e4-8318-677140740104\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.037090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume\") pod \"c19fb3d0-e348-47e4-8318-677140740104\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.037339 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume\") pod \"c19fb3d0-e348-47e4-8318-677140740104\" (UID: \"c19fb3d0-e348-47e4-8318-677140740104\") " Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.037825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume" (OuterVolumeSpecName: "config-volume") pod "c19fb3d0-e348-47e4-8318-677140740104" (UID: "c19fb3d0-e348-47e4-8318-677140740104"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.038575 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19fb3d0-e348-47e4-8318-677140740104-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.043040 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c19fb3d0-e348-47e4-8318-677140740104" (UID: "c19fb3d0-e348-47e4-8318-677140740104"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.043590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq" (OuterVolumeSpecName: "kube-api-access-92lfq") pod "c19fb3d0-e348-47e4-8318-677140740104" (UID: "c19fb3d0-e348-47e4-8318-677140740104"). InnerVolumeSpecName "kube-api-access-92lfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.142919 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c19fb3d0-e348-47e4-8318-677140740104-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.142957 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92lfq\" (UniqueName: \"kubernetes.io/projected/c19fb3d0-e348-47e4-8318-677140740104-kube-api-access-92lfq\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.441676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" event={"ID":"c19fb3d0-e348-47e4-8318-677140740104","Type":"ContainerDied","Data":"f6339caa1b29d85c1cc1bb59304c1ffbed341c10cec976f374326f01b5645ef7"} Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.441955 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6339caa1b29d85c1cc1bb59304c1ffbed341c10cec976f374326f01b5645ef7" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.441719 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9" Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.507990 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"] Jan 23 17:00:04 crc kubenswrapper[4718]: I0123 17:00:04.517897 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486415-hw59c"] Jan 23 17:00:05 crc kubenswrapper[4718]: I0123 17:00:05.156379 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74655295-4c96-4870-b700-b98b7a1e176e" path="/var/lib/kubelet/pods/74655295-4c96-4870-b700-b98b7a1e176e/volumes" Jan 23 17:00:16 crc kubenswrapper[4718]: I0123 17:00:16.140969 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:00:16 crc kubenswrapper[4718]: E0123 17:00:16.142101 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:00:29 crc kubenswrapper[4718]: I0123 17:00:29.141370 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:00:29 crc kubenswrapper[4718]: E0123 17:00:29.142249 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:00:41 crc kubenswrapper[4718]: I0123 17:00:41.141029 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:00:41 crc kubenswrapper[4718]: E0123 17:00:41.141760 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:00:53 crc kubenswrapper[4718]: I0123 17:00:53.140876 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:00:53 crc kubenswrapper[4718]: E0123 17:00:53.141669 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.151841 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486461-g5844"] Jan 23 17:01:00 crc kubenswrapper[4718]: E0123 17:01:00.153018 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19fb3d0-e348-47e4-8318-677140740104" containerName="collect-profiles" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.153036 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19fb3d0-e348-47e4-8318-677140740104" 
containerName="collect-profiles" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.153310 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19fb3d0-e348-47e4-8318-677140740104" containerName="collect-profiles" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.154118 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.166175 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486461-g5844"] Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.277551 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.277623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.277701 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrq5\" (UniqueName: \"kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.277797 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.381154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.381204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.381257 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrq5\" (UniqueName: \"kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.381320 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.392711 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.397789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.399093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.400266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrq5\" (UniqueName: \"kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5\") pod \"keystone-cron-29486461-g5844\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:00 crc kubenswrapper[4718]: I0123 17:01:00.531731 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:01 crc kubenswrapper[4718]: I0123 17:01:01.021884 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486461-g5844"] Jan 23 17:01:01 crc kubenswrapper[4718]: I0123 17:01:01.096333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486461-g5844" event={"ID":"5f29f7cb-356b-4f33-a5db-2b6977793db4","Type":"ContainerStarted","Data":"07a932c91824e869c36652f58858e7caf13a24714ebf91b00e7dbdb46d5fc1e0"} Jan 23 17:01:02 crc kubenswrapper[4718]: I0123 17:01:02.106827 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486461-g5844" event={"ID":"5f29f7cb-356b-4f33-a5db-2b6977793db4","Type":"ContainerStarted","Data":"b960810bdf4028482c410190bdb5449fdc0920c256b33d10b181219bd1cba20f"} Jan 23 17:01:02 crc kubenswrapper[4718]: I0123 17:01:02.131652 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486461-g5844" podStartSLOduration=2.131612696 podStartE2EDuration="2.131612696s" podCreationTimestamp="2026-01-23 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:01:02.122731005 +0000 UTC m=+2663.269972986" watchObservedRunningTime="2026-01-23 17:01:02.131612696 +0000 UTC m=+2663.278854687" Jan 23 17:01:02 crc kubenswrapper[4718]: I0123 17:01:02.134685 4718 scope.go:117] "RemoveContainer" containerID="9b9e6c4a115db5a667ad483c1897b85c47d26e31cc7bc4778b8dab16f6b22381" Jan 23 17:01:04 crc kubenswrapper[4718]: I0123 17:01:04.155828 4718 generic.go:334] "Generic (PLEG): container finished" podID="5f29f7cb-356b-4f33-a5db-2b6977793db4" containerID="b960810bdf4028482c410190bdb5449fdc0920c256b33d10b181219bd1cba20f" exitCode=0 Jan 23 17:01:04 crc kubenswrapper[4718]: I0123 17:01:04.155916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29486461-g5844" event={"ID":"5f29f7cb-356b-4f33-a5db-2b6977793db4","Type":"ContainerDied","Data":"b960810bdf4028482c410190bdb5449fdc0920c256b33d10b181219bd1cba20f"} Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.140520 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:01:05 crc kubenswrapper[4718]: E0123 17:01:05.141137 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.567177 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.651605 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys\") pod \"5f29f7cb-356b-4f33-a5db-2b6977793db4\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.651756 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle\") pod \"5f29f7cb-356b-4f33-a5db-2b6977793db4\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.651994 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data\") pod \"5f29f7cb-356b-4f33-a5db-2b6977793db4\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.652073 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjrq5\" (UniqueName: \"kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5\") pod \"5f29f7cb-356b-4f33-a5db-2b6977793db4\" (UID: \"5f29f7cb-356b-4f33-a5db-2b6977793db4\") " Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.657558 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5f29f7cb-356b-4f33-a5db-2b6977793db4" (UID: "5f29f7cb-356b-4f33-a5db-2b6977793db4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.660299 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5" (OuterVolumeSpecName: "kube-api-access-kjrq5") pod "5f29f7cb-356b-4f33-a5db-2b6977793db4" (UID: "5f29f7cb-356b-4f33-a5db-2b6977793db4"). InnerVolumeSpecName "kube-api-access-kjrq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.688576 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f29f7cb-356b-4f33-a5db-2b6977793db4" (UID: "5f29f7cb-356b-4f33-a5db-2b6977793db4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.715053 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data" (OuterVolumeSpecName: "config-data") pod "5f29f7cb-356b-4f33-a5db-2b6977793db4" (UID: "5f29f7cb-356b-4f33-a5db-2b6977793db4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.757289 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.757322 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.757331 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjrq5\" (UniqueName: \"kubernetes.io/projected/5f29f7cb-356b-4f33-a5db-2b6977793db4-kube-api-access-kjrq5\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:05 crc kubenswrapper[4718]: I0123 17:01:05.757342 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f29f7cb-356b-4f33-a5db-2b6977793db4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:06 crc kubenswrapper[4718]: I0123 17:01:06.180230 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486461-g5844" event={"ID":"5f29f7cb-356b-4f33-a5db-2b6977793db4","Type":"ContainerDied","Data":"07a932c91824e869c36652f58858e7caf13a24714ebf91b00e7dbdb46d5fc1e0"} Jan 23 17:01:06 crc kubenswrapper[4718]: I0123 17:01:06.180291 4718 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="07a932c91824e869c36652f58858e7caf13a24714ebf91b00e7dbdb46d5fc1e0" Jan 23 17:01:06 crc kubenswrapper[4718]: I0123 17:01:06.180343 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486461-g5844" Jan 23 17:01:16 crc kubenswrapper[4718]: I0123 17:01:16.140908 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:01:16 crc kubenswrapper[4718]: E0123 17:01:16.141706 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:30 crc kubenswrapper[4718]: I0123 17:01:30.141055 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:01:30 crc kubenswrapper[4718]: E0123 17:01:30.141974 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.367241 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:37 crc kubenswrapper[4718]: E0123 17:01:37.369215 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f29f7cb-356b-4f33-a5db-2b6977793db4" containerName="keystone-cron" 
Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.369303 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f29f7cb-356b-4f33-a5db-2b6977793db4" containerName="keystone-cron" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.369663 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f29f7cb-356b-4f33-a5db-2b6977793db4" containerName="keystone-cron" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.371652 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.381364 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.467031 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.467105 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5hv\" (UniqueName: \"kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.467199 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" 
Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.569433 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.569502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5hv\" (UniqueName: \"kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.569566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.570089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.570372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 
17:01:37.588092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5hv\" (UniqueName: \"kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv\") pod \"redhat-marketplace-vjb9l\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:37 crc kubenswrapper[4718]: I0123 17:01:37.690754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:38 crc kubenswrapper[4718]: I0123 17:01:38.280298 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:38 crc kubenswrapper[4718]: I0123 17:01:38.546140 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerStarted","Data":"ae8636fc7509a633b9a17e9e23eab61a02dd7e5046ae5ccd36fd1715d8e3afcd"} Jan 23 17:01:39 crc kubenswrapper[4718]: I0123 17:01:39.559237 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerID="cbcf151fbf8227ca1f2ca85f6360c2db9d0253cf4ad53fc8483377c7d04a25b5" exitCode=0 Jan 23 17:01:39 crc kubenswrapper[4718]: I0123 17:01:39.559577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerDied","Data":"cbcf151fbf8227ca1f2ca85f6360c2db9d0253cf4ad53fc8483377c7d04a25b5"} Jan 23 17:01:39 crc kubenswrapper[4718]: I0123 17:01:39.562517 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:01:40 crc kubenswrapper[4718]: I0123 17:01:40.571373 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" 
event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerStarted","Data":"0340ca17438b2f220364827989339d88f6d4821b734d8bbdc77903bdac171e48"} Jan 23 17:01:41 crc kubenswrapper[4718]: I0123 17:01:41.584412 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerID="0340ca17438b2f220364827989339d88f6d4821b734d8bbdc77903bdac171e48" exitCode=0 Jan 23 17:01:41 crc kubenswrapper[4718]: I0123 17:01:41.584481 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerDied","Data":"0340ca17438b2f220364827989339d88f6d4821b734d8bbdc77903bdac171e48"} Jan 23 17:01:42 crc kubenswrapper[4718]: I0123 17:01:42.142282 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:01:42 crc kubenswrapper[4718]: E0123 17:01:42.142612 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:42 crc kubenswrapper[4718]: I0123 17:01:42.605483 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerStarted","Data":"5311770255f6989ec1e06c27779cd610355a03d85d086a90a685a1b9638953bf"} Jan 23 17:01:42 crc kubenswrapper[4718]: I0123 17:01:42.636158 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjb9l" podStartSLOduration=3.151661542 podStartE2EDuration="5.636121073s" 
podCreationTimestamp="2026-01-23 17:01:37 +0000 UTC" firstStartedPulling="2026-01-23 17:01:39.562282067 +0000 UTC m=+2700.709524058" lastFinishedPulling="2026-01-23 17:01:42.046741598 +0000 UTC m=+2703.193983589" observedRunningTime="2026-01-23 17:01:42.627617273 +0000 UTC m=+2703.774859314" watchObservedRunningTime="2026-01-23 17:01:42.636121073 +0000 UTC m=+2703.783363074" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.723453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.728382 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.765708 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.869001 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.869084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.869692 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgppc\" (UniqueName: 
\"kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.971879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgppc\" (UniqueName: \"kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.972259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.972355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.972717 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.972788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:43 crc kubenswrapper[4718]: I0123 17:01:43.991699 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgppc\" (UniqueName: \"kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc\") pod \"redhat-operators-bvzrf\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:44 crc kubenswrapper[4718]: I0123 17:01:44.047117 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:44 crc kubenswrapper[4718]: W0123 17:01:44.641451 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b47454b_eab3_43e0_b594_d36f2d8b4834.slice/crio-2016e9f0c2014324ca8c6910b08843792c9abe6fcf3ed9f42843d49445c3f1cc WatchSource:0}: Error finding container 2016e9f0c2014324ca8c6910b08843792c9abe6fcf3ed9f42843d49445c3f1cc: Status 404 returned error can't find the container with id 2016e9f0c2014324ca8c6910b08843792c9abe6fcf3ed9f42843d49445c3f1cc Jan 23 17:01:44 crc kubenswrapper[4718]: I0123 17:01:44.644414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:01:45 crc kubenswrapper[4718]: I0123 17:01:45.636445 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerID="ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db" exitCode=0 Jan 23 17:01:45 crc kubenswrapper[4718]: I0123 17:01:45.636542 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" 
event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerDied","Data":"ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db"} Jan 23 17:01:45 crc kubenswrapper[4718]: I0123 17:01:45.636808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerStarted","Data":"2016e9f0c2014324ca8c6910b08843792c9abe6fcf3ed9f42843d49445c3f1cc"} Jan 23 17:01:46 crc kubenswrapper[4718]: I0123 17:01:46.651135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerStarted","Data":"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c"} Jan 23 17:01:47 crc kubenswrapper[4718]: I0123 17:01:47.691087 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:47 crc kubenswrapper[4718]: I0123 17:01:47.691128 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:47 crc kubenswrapper[4718]: I0123 17:01:47.747538 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:48 crc kubenswrapper[4718]: I0123 17:01:48.671506 4718 generic.go:334] "Generic (PLEG): container finished" podID="680f27a4-945b-4f46-ae19-c0b05b6f3d4c" containerID="063d7de350dc0e55ed014c21b3fbd26b8ed7bbd4a108a4d2864ebdbca9bcfe08" exitCode=0 Jan 23 17:01:48 crc kubenswrapper[4718]: I0123 17:01:48.671599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" event={"ID":"680f27a4-945b-4f46-ae19-c0b05b6f3d4c","Type":"ContainerDied","Data":"063d7de350dc0e55ed014c21b3fbd26b8ed7bbd4a108a4d2864ebdbca9bcfe08"} Jan 23 17:01:48 crc kubenswrapper[4718]: I0123 
17:01:48.727292 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.457240 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.531100 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.627532 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam\") pod \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.627592 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0\") pod \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.627656 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory\") pod \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.627882 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle\") pod \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " Jan 23 17:01:50 crc 
kubenswrapper[4718]: I0123 17:01:50.627972 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf47k\" (UniqueName: \"kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k\") pod \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\" (UID: \"680f27a4-945b-4f46-ae19-c0b05b6f3d4c\") " Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.634387 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k" (OuterVolumeSpecName: "kube-api-access-lf47k") pod "680f27a4-945b-4f46-ae19-c0b05b6f3d4c" (UID: "680f27a4-945b-4f46-ae19-c0b05b6f3d4c"). InnerVolumeSpecName "kube-api-access-lf47k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.637582 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "680f27a4-945b-4f46-ae19-c0b05b6f3d4c" (UID: "680f27a4-945b-4f46-ae19-c0b05b6f3d4c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.663651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "680f27a4-945b-4f46-ae19-c0b05b6f3d4c" (UID: "680f27a4-945b-4f46-ae19-c0b05b6f3d4c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.676147 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory" (OuterVolumeSpecName: "inventory") pod "680f27a4-945b-4f46-ae19-c0b05b6f3d4c" (UID: "680f27a4-945b-4f46-ae19-c0b05b6f3d4c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.679120 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "680f27a4-945b-4f46-ae19-c0b05b6f3d4c" (UID: "680f27a4-945b-4f46-ae19-c0b05b6f3d4c"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.714490 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.714768 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv" event={"ID":"680f27a4-945b-4f46-ae19-c0b05b6f3d4c","Type":"ContainerDied","Data":"316309a76fb09b31228d87574cc035d5319454e53d07c28075975c09e7d94082"} Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.714901 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316309a76fb09b31228d87574cc035d5319454e53d07c28075975c09e7d94082" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.728430 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerID="f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c" exitCode=0 Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.728508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerDied","Data":"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c"} Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.728856 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjb9l" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="registry-server" containerID="cri-o://5311770255f6989ec1e06c27779cd610355a03d85d086a90a685a1b9638953bf" gracePeriod=2 Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.733653 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.733691 4718 reconciler_common.go:293] "Volume detached for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.733708 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.733722 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.734298 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf47k\" (UniqueName: \"kubernetes.io/projected/680f27a4-945b-4f46-ae19-c0b05b6f3d4c-kube-api-access-lf47k\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.823891 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl"] Jan 23 17:01:50 crc kubenswrapper[4718]: E0123 17:01:50.839319 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680f27a4-945b-4f46-ae19-c0b05b6f3d4c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.839359 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="680f27a4-945b-4f46-ae19-c0b05b6f3d4c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.841183 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="680f27a4-945b-4f46-ae19-c0b05b6f3d4c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.854322 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.860560 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861113 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861154 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861324 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861461 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861331 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.861807 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.897141 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl"] Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.951402 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: 
I0123 17:01:50.951452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.951599 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.951878 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tp27\" (UniqueName: \"kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.952030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.952083 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.952155 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.952308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:50 crc kubenswrapper[4718]: I0123 17:01:50.952467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.056554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: 
\"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.057202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.057310 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.057465 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tp27\" (UniqueName: \"kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.057777 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.059778 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.060972 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.061035 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.061257 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.061416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: 
\"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.062209 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.063351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.064217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.064892 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.065136 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.066598 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.075076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.076473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tp27\" (UniqueName: \"kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zq8vl\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.315472 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.786566 4718 generic.go:334] "Generic (PLEG): container finished" podID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerID="5311770255f6989ec1e06c27779cd610355a03d85d086a90a685a1b9638953bf" exitCode=0 Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.787076 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerDied","Data":"5311770255f6989ec1e06c27779cd610355a03d85d086a90a685a1b9638953bf"} Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.788845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerStarted","Data":"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a"} Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.821995 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bvzrf" podStartSLOduration=3.237518789 podStartE2EDuration="8.821978395s" podCreationTimestamp="2026-01-23 17:01:43 +0000 UTC" firstStartedPulling="2026-01-23 17:01:45.639664285 +0000 UTC m=+2706.786906286" lastFinishedPulling="2026-01-23 17:01:51.224123901 +0000 UTC m=+2712.371365892" observedRunningTime="2026-01-23 17:01:51.816118857 +0000 UTC m=+2712.963360848" watchObservedRunningTime="2026-01-23 17:01:51.821978395 +0000 UTC m=+2712.969220386" Jan 23 17:01:51 crc kubenswrapper[4718]: I0123 17:01:51.964842 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.024952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities\") pod \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.025081 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5hv\" (UniqueName: \"kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv\") pod \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.025281 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content\") pod \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\" (UID: \"b8ae139e-07d1-4e3c-8340-c7f9ad40165c\") " Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.028274 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities" (OuterVolumeSpecName: "utilities") pod "b8ae139e-07d1-4e3c-8340-c7f9ad40165c" (UID: "b8ae139e-07d1-4e3c-8340-c7f9ad40165c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.034900 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv" (OuterVolumeSpecName: "kube-api-access-bf5hv") pod "b8ae139e-07d1-4e3c-8340-c7f9ad40165c" (UID: "b8ae139e-07d1-4e3c-8340-c7f9ad40165c"). InnerVolumeSpecName "kube-api-access-bf5hv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.085288 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl"] Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.104064 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8ae139e-07d1-4e3c-8340-c7f9ad40165c" (UID: "b8ae139e-07d1-4e3c-8340-c7f9ad40165c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.130504 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.130535 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5hv\" (UniqueName: \"kubernetes.io/projected/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-kube-api-access-bf5hv\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.130546 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ae139e-07d1-4e3c-8340-c7f9ad40165c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.807974 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjb9l" event={"ID":"b8ae139e-07d1-4e3c-8340-c7f9ad40165c","Type":"ContainerDied","Data":"ae8636fc7509a633b9a17e9e23eab61a02dd7e5046ae5ccd36fd1715d8e3afcd"} Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.808025 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjb9l" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.808034 4718 scope.go:117] "RemoveContainer" containerID="5311770255f6989ec1e06c27779cd610355a03d85d086a90a685a1b9638953bf" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.813851 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" event={"ID":"7354170c-e5c6-4c6e-be23-d2c6bd685aa0","Type":"ContainerStarted","Data":"2ed05db6152012b74d4cdb1ca715bba0f3081825842cc3aac549fc5991f722f2"} Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.853882 4718 scope.go:117] "RemoveContainer" containerID="0340ca17438b2f220364827989339d88f6d4821b734d8bbdc77903bdac171e48" Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.869046 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.880823 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjb9l"] Jan 23 17:01:52 crc kubenswrapper[4718]: I0123 17:01:52.944387 4718 scope.go:117] "RemoveContainer" containerID="cbcf151fbf8227ca1f2ca85f6360c2db9d0253cf4ad53fc8483377c7d04a25b5" Jan 23 17:01:53 crc kubenswrapper[4718]: I0123 17:01:53.142268 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:01:53 crc kubenswrapper[4718]: E0123 17:01:53.143600 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:01:53 crc 
kubenswrapper[4718]: I0123 17:01:53.158285 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" path="/var/lib/kubelet/pods/b8ae139e-07d1-4e3c-8340-c7f9ad40165c/volumes" Jan 23 17:01:53 crc kubenswrapper[4718]: I0123 17:01:53.825474 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" event={"ID":"7354170c-e5c6-4c6e-be23-d2c6bd685aa0","Type":"ContainerStarted","Data":"cbe4a9dc3cf050a5f36b5c911303db5dcaf1f877c9e4c6fd2b50592546447a33"} Jan 23 17:01:53 crc kubenswrapper[4718]: I0123 17:01:53.848705 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" podStartSLOduration=3.083841049 podStartE2EDuration="3.84868406s" podCreationTimestamp="2026-01-23 17:01:50 +0000 UTC" firstStartedPulling="2026-01-23 17:01:52.114912966 +0000 UTC m=+2713.262154957" lastFinishedPulling="2026-01-23 17:01:52.879755977 +0000 UTC m=+2714.026997968" observedRunningTime="2026-01-23 17:01:53.83982924 +0000 UTC m=+2714.987071231" watchObservedRunningTime="2026-01-23 17:01:53.84868406 +0000 UTC m=+2714.995926041" Jan 23 17:01:54 crc kubenswrapper[4718]: I0123 17:01:54.048008 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:54 crc kubenswrapper[4718]: I0123 17:01:54.048069 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:01:55 crc kubenswrapper[4718]: I0123 17:01:55.108439 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bvzrf" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="registry-server" probeResult="failure" output=< Jan 23 17:01:55 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 17:01:55 crc kubenswrapper[4718]: > Jan 23 
17:02:04 crc kubenswrapper[4718]: I0123 17:02:04.140449 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:02:04 crc kubenswrapper[4718]: I0123 17:02:04.149392 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:02:04 crc kubenswrapper[4718]: I0123 17:02:04.203191 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:02:04 crc kubenswrapper[4718]: I0123 17:02:04.396322 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:02:04 crc kubenswrapper[4718]: I0123 17:02:04.945180 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1"} Jan 23 17:02:05 crc kubenswrapper[4718]: I0123 17:02:05.958049 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bvzrf" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="registry-server" containerID="cri-o://f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a" gracePeriod=2 Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.522108 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.716792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgppc\" (UniqueName: \"kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc\") pod \"2b47454b-eab3-43e0-b594-d36f2d8b4834\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.716848 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities\") pod \"2b47454b-eab3-43e0-b594-d36f2d8b4834\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.717032 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content\") pod \"2b47454b-eab3-43e0-b594-d36f2d8b4834\" (UID: \"2b47454b-eab3-43e0-b594-d36f2d8b4834\") " Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.718018 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities" (OuterVolumeSpecName: "utilities") pod "2b47454b-eab3-43e0-b594-d36f2d8b4834" (UID: "2b47454b-eab3-43e0-b594-d36f2d8b4834"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.719281 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.730586 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc" (OuterVolumeSpecName: "kube-api-access-bgppc") pod "2b47454b-eab3-43e0-b594-d36f2d8b4834" (UID: "2b47454b-eab3-43e0-b594-d36f2d8b4834"). InnerVolumeSpecName "kube-api-access-bgppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.821516 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgppc\" (UniqueName: \"kubernetes.io/projected/2b47454b-eab3-43e0-b594-d36f2d8b4834-kube-api-access-bgppc\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.846269 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b47454b-eab3-43e0-b594-d36f2d8b4834" (UID: "2b47454b-eab3-43e0-b594-d36f2d8b4834"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.923528 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b47454b-eab3-43e0-b594-d36f2d8b4834-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.971863 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerID="f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a" exitCode=0 Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.971908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerDied","Data":"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a"} Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.971968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvzrf" event={"ID":"2b47454b-eab3-43e0-b594-d36f2d8b4834","Type":"ContainerDied","Data":"2016e9f0c2014324ca8c6910b08843792c9abe6fcf3ed9f42843d49445c3f1cc"} Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.971988 4718 scope.go:117] "RemoveContainer" containerID="f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a" Jan 23 17:02:06 crc kubenswrapper[4718]: I0123 17:02:06.971930 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bvzrf" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.000621 4718 scope.go:117] "RemoveContainer" containerID="f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.011646 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.022371 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bvzrf"] Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.039760 4718 scope.go:117] "RemoveContainer" containerID="ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.085664 4718 scope.go:117] "RemoveContainer" containerID="f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a" Jan 23 17:02:07 crc kubenswrapper[4718]: E0123 17:02:07.086054 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a\": container with ID starting with f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a not found: ID does not exist" containerID="f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.086088 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a"} err="failed to get container status \"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a\": rpc error: code = NotFound desc = could not find container \"f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a\": container with ID starting with f14f81879fcd81bfa442e479ec754498323c689570280c34bc1957c6f8a9810a not found: ID does 
not exist" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.086113 4718 scope.go:117] "RemoveContainer" containerID="f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c" Jan 23 17:02:07 crc kubenswrapper[4718]: E0123 17:02:07.086358 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c\": container with ID starting with f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c not found: ID does not exist" containerID="f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.086388 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c"} err="failed to get container status \"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c\": rpc error: code = NotFound desc = could not find container \"f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c\": container with ID starting with f0948d2a8cff11b5a91ed9b12d02e3d95a68a9d5a223a661b7739e1e8df8e03c not found: ID does not exist" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.086403 4718 scope.go:117] "RemoveContainer" containerID="ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db" Jan 23 17:02:07 crc kubenswrapper[4718]: E0123 17:02:07.086606 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db\": container with ID starting with ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db not found: ID does not exist" containerID="ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.086641 4718 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db"} err="failed to get container status \"ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db\": rpc error: code = NotFound desc = could not find container \"ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db\": container with ID starting with ddd300a28e19897caa69eddbd4544407ce3e46152749afb02a098f686a3c95db not found: ID does not exist" Jan 23 17:02:07 crc kubenswrapper[4718]: I0123 17:02:07.156380 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" path="/var/lib/kubelet/pods/2b47454b-eab3-43e0-b594-d36f2d8b4834/volumes" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.581855 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583262 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="extract-content" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583283 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="extract-content" Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583299 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="extract-utilities" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583307 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="extract-utilities" Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583331 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="extract-content" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583339 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="extract-content" Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583364 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583372 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583389 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583397 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: E0123 17:02:51.583415 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="extract-utilities" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583422 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="extract-utilities" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583742 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b47454b-eab3-43e0-b594-d36f2d8b4834" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.583773 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ae139e-07d1-4e3c-8340-c7f9ad40165c" containerName="registry-server" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.586243 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.602846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.730441 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fhn\" (UniqueName: \"kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.730495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.730570 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.833161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9fhn\" (UniqueName: \"kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.833223 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.833299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.833926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.834141 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.853894 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fhn\" (UniqueName: \"kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn\") pod \"community-operators-bz4lg\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:51 crc kubenswrapper[4718]: I0123 17:02:51.915599 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.292695 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.338955 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.339095 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.368250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.368419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.368543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpxb\" (UniqueName: \"kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.470987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.471115 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.471199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpxb\" (UniqueName: \"kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.471479 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.471747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.495309 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpxb\" (UniqueName: 
\"kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb\") pod \"certified-operators-f52qw\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.605601 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:02:52 crc kubenswrapper[4718]: I0123 17:02:52.703763 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:02:53 crc kubenswrapper[4718]: W0123 17:02:53.256576 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2987528f_d7f0_4624_872d_a932e4dc3798.slice/crio-b5420a51ed32ee9af2f6c06e9af00c3a52e76d7ee48d8be6434d8a2e02a91845 WatchSource:0}: Error finding container b5420a51ed32ee9af2f6c06e9af00c3a52e76d7ee48d8be6434d8a2e02a91845: Status 404 returned error can't find the container with id b5420a51ed32ee9af2f6c06e9af00c3a52e76d7ee48d8be6434d8a2e02a91845 Jan 23 17:02:53 crc kubenswrapper[4718]: I0123 17:02:53.262908 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:02:53 crc kubenswrapper[4718]: I0123 17:02:53.471972 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerStarted","Data":"b5420a51ed32ee9af2f6c06e9af00c3a52e76d7ee48d8be6434d8a2e02a91845"} Jan 23 17:02:53 crc kubenswrapper[4718]: I0123 17:02:53.473811 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerID="239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3" exitCode=0 Jan 23 17:02:53 crc kubenswrapper[4718]: I0123 17:02:53.473847 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerDied","Data":"239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3"} Jan 23 17:02:53 crc kubenswrapper[4718]: I0123 17:02:53.473868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerStarted","Data":"3eb9f4f3da5f943ad36bdfe180b4ebf27fe2a155b4f64495c5f0d3188de2474b"} Jan 23 17:02:54 crc kubenswrapper[4718]: I0123 17:02:54.484350 4718 generic.go:334] "Generic (PLEG): container finished" podID="2987528f-d7f0-4624-872d-a932e4dc3798" containerID="6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631" exitCode=0 Jan 23 17:02:54 crc kubenswrapper[4718]: I0123 17:02:54.484462 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerDied","Data":"6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631"} Jan 23 17:02:55 crc kubenswrapper[4718]: I0123 17:02:55.497874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerStarted","Data":"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00"} Jan 23 17:02:56 crc kubenswrapper[4718]: E0123 17:02:56.412421 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2987528f_d7f0_4624_872d_a932e4dc3798.slice/crio-ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.510331 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="2987528f-d7f0-4624-872d-a932e4dc3798" containerID="ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c" exitCode=0 Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.510995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerDied","Data":"ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c"} Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.514748 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerID="9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00" exitCode=0 Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.514806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerDied","Data":"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00"} Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.514831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerStarted","Data":"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c"} Jan 23 17:02:56 crc kubenswrapper[4718]: I0123 17:02:56.561380 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bz4lg" podStartSLOduration=3.135009775 podStartE2EDuration="5.561362841s" podCreationTimestamp="2026-01-23 17:02:51 +0000 UTC" firstStartedPulling="2026-01-23 17:02:53.475762046 +0000 UTC m=+2774.623004037" lastFinishedPulling="2026-01-23 17:02:55.902115112 +0000 UTC m=+2777.049357103" observedRunningTime="2026-01-23 17:02:56.554808023 +0000 UTC m=+2777.702050014" watchObservedRunningTime="2026-01-23 17:02:56.561362841 +0000 UTC m=+2777.708604832" Jan 23 
17:02:57 crc kubenswrapper[4718]: I0123 17:02:57.527401 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerStarted","Data":"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1"} Jan 23 17:02:57 crc kubenswrapper[4718]: I0123 17:02:57.562350 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f52qw" podStartSLOduration=3.107899764 podStartE2EDuration="5.562328522s" podCreationTimestamp="2026-01-23 17:02:52 +0000 UTC" firstStartedPulling="2026-01-23 17:02:54.487292063 +0000 UTC m=+2775.634534054" lastFinishedPulling="2026-01-23 17:02:56.94172081 +0000 UTC m=+2778.088962812" observedRunningTime="2026-01-23 17:02:57.546505523 +0000 UTC m=+2778.693747514" watchObservedRunningTime="2026-01-23 17:02:57.562328522 +0000 UTC m=+2778.709570513" Jan 23 17:03:01 crc kubenswrapper[4718]: I0123 17:03:01.917611 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:01 crc kubenswrapper[4718]: I0123 17:03:01.918310 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:01 crc kubenswrapper[4718]: I0123 17:03:01.969716 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:02 crc kubenswrapper[4718]: I0123 17:03:02.632182 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:02 crc kubenswrapper[4718]: I0123 17:03:02.688875 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:03:02 crc kubenswrapper[4718]: I0123 17:03:02.718962 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:02 crc kubenswrapper[4718]: I0123 17:03:02.719025 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:02 crc kubenswrapper[4718]: I0123 17:03:02.780355 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:03 crc kubenswrapper[4718]: I0123 17:03:03.647411 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:04 crc kubenswrapper[4718]: I0123 17:03:04.599198 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bz4lg" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="registry-server" containerID="cri-o://d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c" gracePeriod=2 Jan 23 17:03:04 crc kubenswrapper[4718]: I0123 17:03:04.623445 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.145095 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.316658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content\") pod \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.316825 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities\") pod \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.317065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9fhn\" (UniqueName: \"kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn\") pod \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\" (UID: \"7f4e07e3-53cc-4954-87cf-f636344b0ddb\") " Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.317458 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities" (OuterVolumeSpecName: "utilities") pod "7f4e07e3-53cc-4954-87cf-f636344b0ddb" (UID: "7f4e07e3-53cc-4954-87cf-f636344b0ddb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.318508 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.328983 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn" (OuterVolumeSpecName: "kube-api-access-f9fhn") pod "7f4e07e3-53cc-4954-87cf-f636344b0ddb" (UID: "7f4e07e3-53cc-4954-87cf-f636344b0ddb"). InnerVolumeSpecName "kube-api-access-f9fhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.384307 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f4e07e3-53cc-4954-87cf-f636344b0ddb" (UID: "7f4e07e3-53cc-4954-87cf-f636344b0ddb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.420719 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9fhn\" (UniqueName: \"kubernetes.io/projected/7f4e07e3-53cc-4954-87cf-f636344b0ddb-kube-api-access-f9fhn\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.420750 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4e07e3-53cc-4954-87cf-f636344b0ddb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610102 4718 generic.go:334] "Generic (PLEG): container finished" podID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerID="d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c" exitCode=0 Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610191 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerDied","Data":"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c"} Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610249 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz4lg" event={"ID":"7f4e07e3-53cc-4954-87cf-f636344b0ddb","Type":"ContainerDied","Data":"3eb9f4f3da5f943ad36bdfe180b4ebf27fe2a155b4f64495c5f0d3188de2474b"} Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610268 4718 scope.go:117] "RemoveContainer" containerID="d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610358 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f52qw" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="registry-server" 
containerID="cri-o://248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1" gracePeriod=2 Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.610486 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz4lg" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.662858 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.669767 4718 scope.go:117] "RemoveContainer" containerID="9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.672657 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bz4lg"] Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.847976 4718 scope.go:117] "RemoveContainer" containerID="239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.911301 4718 scope.go:117] "RemoveContainer" containerID="d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c" Jan 23 17:03:05 crc kubenswrapper[4718]: E0123 17:03:05.912143 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c\": container with ID starting with d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c not found: ID does not exist" containerID="d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.912174 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c"} err="failed to get container status \"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c\": rpc error: 
code = NotFound desc = could not find container \"d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c\": container with ID starting with d160aa41126cdb1417ac6736374841814520b9d4257142636decce5fac540e9c not found: ID does not exist" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.912201 4718 scope.go:117] "RemoveContainer" containerID="9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00" Jan 23 17:03:05 crc kubenswrapper[4718]: E0123 17:03:05.912487 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00\": container with ID starting with 9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00 not found: ID does not exist" containerID="9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.912515 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00"} err="failed to get container status \"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00\": rpc error: code = NotFound desc = could not find container \"9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00\": container with ID starting with 9348a0ce82a1628b1b1b25877a22aa3d8d70a4c647de2beb0777578d83ceee00 not found: ID does not exist" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.912531 4718 scope.go:117] "RemoveContainer" containerID="239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3" Jan 23 17:03:05 crc kubenswrapper[4718]: E0123 17:03:05.912807 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3\": container with ID starting with 
239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3 not found: ID does not exist" containerID="239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3" Jan 23 17:03:05 crc kubenswrapper[4718]: I0123 17:03:05.912835 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3"} err="failed to get container status \"239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3\": rpc error: code = NotFound desc = could not find container \"239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3\": container with ID starting with 239a2b4ac0a3d58b2d305f9e9d0cdad5a51e6f0db2348f1eaad3cd8898fb66e3 not found: ID does not exist" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.163527 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.242682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities\") pod \"2987528f-d7f0-4624-872d-a932e4dc3798\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.242789 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content\") pod \"2987528f-d7f0-4624-872d-a932e4dc3798\" (UID: \"2987528f-d7f0-4624-872d-a932e4dc3798\") " Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.242853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrpxb\" (UniqueName: \"kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb\") pod \"2987528f-d7f0-4624-872d-a932e4dc3798\" (UID: 
\"2987528f-d7f0-4624-872d-a932e4dc3798\") " Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.243795 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities" (OuterVolumeSpecName: "utilities") pod "2987528f-d7f0-4624-872d-a932e4dc3798" (UID: "2987528f-d7f0-4624-872d-a932e4dc3798"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.244510 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.248053 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb" (OuterVolumeSpecName: "kube-api-access-vrpxb") pod "2987528f-d7f0-4624-872d-a932e4dc3798" (UID: "2987528f-d7f0-4624-872d-a932e4dc3798"). InnerVolumeSpecName "kube-api-access-vrpxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.289421 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2987528f-d7f0-4624-872d-a932e4dc3798" (UID: "2987528f-d7f0-4624-872d-a932e4dc3798"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.347164 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2987528f-d7f0-4624-872d-a932e4dc3798-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.347198 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrpxb\" (UniqueName: \"kubernetes.io/projected/2987528f-d7f0-4624-872d-a932e4dc3798-kube-api-access-vrpxb\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.626106 4718 generic.go:334] "Generic (PLEG): container finished" podID="2987528f-d7f0-4624-872d-a932e4dc3798" containerID="248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1" exitCode=0 Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.626181 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerDied","Data":"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1"} Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.626805 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f52qw" event={"ID":"2987528f-d7f0-4624-872d-a932e4dc3798","Type":"ContainerDied","Data":"b5420a51ed32ee9af2f6c06e9af00c3a52e76d7ee48d8be6434d8a2e02a91845"} Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.626843 4718 scope.go:117] "RemoveContainer" containerID="248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.626315 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f52qw" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.656512 4718 scope.go:117] "RemoveContainer" containerID="ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.694640 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.699286 4718 scope.go:117] "RemoveContainer" containerID="6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.717745 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f52qw"] Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.721697 4718 scope.go:117] "RemoveContainer" containerID="248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1" Jan 23 17:03:06 crc kubenswrapper[4718]: E0123 17:03:06.722222 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1\": container with ID starting with 248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1 not found: ID does not exist" containerID="248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.722276 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1"} err="failed to get container status \"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1\": rpc error: code = NotFound desc = could not find container \"248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1\": container with ID starting with 248b12b3bb7d9d759739f73499e31721c4910465d89d6139e44686aca91c80f1 not 
found: ID does not exist" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.722311 4718 scope.go:117] "RemoveContainer" containerID="ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c" Jan 23 17:03:06 crc kubenswrapper[4718]: E0123 17:03:06.722550 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c\": container with ID starting with ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c not found: ID does not exist" containerID="ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.722592 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c"} err="failed to get container status \"ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c\": rpc error: code = NotFound desc = could not find container \"ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c\": container with ID starting with ec79d352d8076bcf13bd018a24a01bfe429e611cb5712b7e8475373205db5d8c not found: ID does not exist" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.722611 4718 scope.go:117] "RemoveContainer" containerID="6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631" Jan 23 17:03:06 crc kubenswrapper[4718]: E0123 17:03:06.722874 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631\": container with ID starting with 6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631 not found: ID does not exist" containerID="6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631" Jan 23 17:03:06 crc kubenswrapper[4718]: I0123 17:03:06.722923 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631"} err="failed to get container status \"6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631\": rpc error: code = NotFound desc = could not find container \"6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631\": container with ID starting with 6da398dcce135656c854f35181bae49a97dfacebc0841db21cd805952b171631 not found: ID does not exist" Jan 23 17:03:07 crc kubenswrapper[4718]: I0123 17:03:07.153223 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" path="/var/lib/kubelet/pods/2987528f-d7f0-4624-872d-a932e4dc3798/volumes" Jan 23 17:03:07 crc kubenswrapper[4718]: I0123 17:03:07.154280 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" path="/var/lib/kubelet/pods/7f4e07e3-53cc-4954-87cf-f636344b0ddb/volumes" Jan 23 17:04:08 crc kubenswrapper[4718]: I0123 17:04:08.310358 4718 generic.go:334] "Generic (PLEG): container finished" podID="7354170c-e5c6-4c6e-be23-d2c6bd685aa0" containerID="cbe4a9dc3cf050a5f36b5c911303db5dcaf1f877c9e4c6fd2b50592546447a33" exitCode=0 Jan 23 17:04:08 crc kubenswrapper[4718]: I0123 17:04:08.310438 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" event={"ID":"7354170c-e5c6-4c6e-be23-d2c6bd685aa0","Type":"ContainerDied","Data":"cbe4a9dc3cf050a5f36b5c911303db5dcaf1f877c9e4c6fd2b50592546447a33"} Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.787138 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906188 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906291 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906319 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906412 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906437 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tp27\" (UniqueName: \"kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906495 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.906538 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0\") pod \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\" (UID: \"7354170c-e5c6-4c6e-be23-d2c6bd685aa0\") " Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.913175 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.913406 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27" (OuterVolumeSpecName: "kube-api-access-8tp27") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "kube-api-access-8tp27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.948372 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.953978 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.956258 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory" (OuterVolumeSpecName: "inventory") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.956316 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.956680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.958196 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:09 crc kubenswrapper[4718]: I0123 17:04:09.967888 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7354170c-e5c6-4c6e-be23-d2c6bd685aa0" (UID: "7354170c-e5c6-4c6e-be23-d2c6bd685aa0"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.008714 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.008994 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009004 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009013 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009021 4718 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009030 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009039 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-cell1-compute-config-1\") on 
node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009049 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.009059 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tp27\" (UniqueName: \"kubernetes.io/projected/7354170c-e5c6-4c6e-be23-d2c6bd685aa0-kube-api-access-8tp27\") on node \"crc\" DevicePath \"\"" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.335930 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.335983 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zq8vl" event={"ID":"7354170c-e5c6-4c6e-be23-d2c6bd685aa0","Type":"ContainerDied","Data":"2ed05db6152012b74d4cdb1ca715bba0f3081825842cc3aac549fc5991f722f2"} Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.336021 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed05db6152012b74d4cdb1ca715bba0f3081825842cc3aac549fc5991f722f2" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.441981 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77"] Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442444 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="extract-content" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="extract-content" Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442485 
4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442492 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442504 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="extract-utilities" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442512 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="extract-utilities" Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442534 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442540 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442559 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="extract-utilities" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442566 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="extract-utilities" Jan 23 17:04:10 crc kubenswrapper[4718]: E0123 17:04:10.442578 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7354170c-e5c6-4c6e-be23-d2c6bd685aa0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442584 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7354170c-e5c6-4c6e-be23-d2c6bd685aa0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:04:10 crc 
kubenswrapper[4718]: E0123 17:04:10.442603 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="extract-content" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442609 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="extract-content" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442856 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4e07e3-53cc-4954-87cf-f636344b0ddb" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442875 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7354170c-e5c6-4c6e-be23-d2c6bd685aa0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.442888 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2987528f-d7f0-4624-872d-a932e4dc3798" containerName="registry-server" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.444300 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.447175 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.447484 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.447738 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.447868 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.448002 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.466249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77"] Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.623922 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.623996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.624041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmlt\" (UniqueName: \"kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.624213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.624518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.624799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: 
\"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.625176 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.728943 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729158 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729426 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729456 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmlt\" (UniqueName: \"kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.729532 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.734824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.735356 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.735661 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.736286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.736944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.740981 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.752879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmlt\" (UniqueName: \"kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5gq77\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:10 crc kubenswrapper[4718]: I0123 17:04:10.766861 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:04:11 crc kubenswrapper[4718]: I0123 17:04:11.499056 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77"] Jan 23 17:04:12 crc kubenswrapper[4718]: I0123 17:04:12.378037 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" event={"ID":"708430f1-d1c7-46ef-9e2c-9077a85c95fb","Type":"ContainerStarted","Data":"13abf2aa7afb2128838ba6a78eacfb55f459b8e7cded1986a9348a36a5b11889"} Jan 23 17:04:12 crc kubenswrapper[4718]: I0123 17:04:12.378386 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" event={"ID":"708430f1-d1c7-46ef-9e2c-9077a85c95fb","Type":"ContainerStarted","Data":"76f1675f228ca50b78c59f25bc26d48509bd340137e9544ef9d1eb5266d38687"} Jan 23 17:04:12 crc kubenswrapper[4718]: I0123 17:04:12.403524 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" podStartSLOduration=1.977748273 podStartE2EDuration="2.403505079s" podCreationTimestamp="2026-01-23 17:04:10 +0000 UTC" firstStartedPulling="2026-01-23 17:04:11.498537707 +0000 UTC m=+2852.645779698" lastFinishedPulling="2026-01-23 17:04:11.924294513 +0000 UTC m=+2853.071536504" observedRunningTime="2026-01-23 17:04:12.39440519 +0000 UTC m=+2853.541647181" watchObservedRunningTime="2026-01-23 17:04:12.403505079 +0000 UTC m=+2853.550747070" Jan 23 17:04:28 crc kubenswrapper[4718]: I0123 17:04:28.875807 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:04:28 crc kubenswrapper[4718]: 
I0123 17:04:28.876480 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:04:58 crc kubenswrapper[4718]: I0123 17:04:58.881799 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:04:58 crc kubenswrapper[4718]: I0123 17:04:58.882432 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:05:28 crc kubenswrapper[4718]: I0123 17:05:28.876337 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:05:28 crc kubenswrapper[4718]: I0123 17:05:28.876845 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:05:28 crc kubenswrapper[4718]: I0123 17:05:28.876890 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:05:28 crc kubenswrapper[4718]: I0123 17:05:28.877842 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:05:28 crc kubenswrapper[4718]: I0123 17:05:28.877900 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1" gracePeriod=600 Jan 23 17:05:29 crc kubenswrapper[4718]: I0123 17:05:29.120012 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1" exitCode=0 Jan 23 17:05:29 crc kubenswrapper[4718]: I0123 17:05:29.120078 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1"} Jan 23 17:05:29 crc kubenswrapper[4718]: I0123 17:05:29.120339 4718 scope.go:117] "RemoveContainer" containerID="a31bdbd6bd5efd937694da86eb4e96819168b600e24aa9571bebf2efaf139512" Jan 23 17:05:30 crc kubenswrapper[4718]: I0123 17:05:30.164545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b"} Jan 23 17:06:39 crc kubenswrapper[4718]: I0123 17:06:39.915160 4718 generic.go:334] "Generic (PLEG): container finished" podID="708430f1-d1c7-46ef-9e2c-9077a85c95fb" containerID="13abf2aa7afb2128838ba6a78eacfb55f459b8e7cded1986a9348a36a5b11889" exitCode=0 Jan 23 17:06:39 crc kubenswrapper[4718]: I0123 17:06:39.915235 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" event={"ID":"708430f1-d1c7-46ef-9e2c-9077a85c95fb","Type":"ContainerDied","Data":"13abf2aa7afb2128838ba6a78eacfb55f459b8e7cded1986a9348a36a5b11889"} Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.450885 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550507 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550542 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmlt\" (UniqueName: \"kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt\") pod 
\"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550564 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550589 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550677 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.550803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2\") pod \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\" (UID: \"708430f1-d1c7-46ef-9e2c-9077a85c95fb\") " Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.557538 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.557455 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt" (OuterVolumeSpecName: "kube-api-access-wpmlt") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "kube-api-access-wpmlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.585178 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.590371 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.599490 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.599604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.601068 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory" (OuterVolumeSpecName: "inventory") pod "708430f1-d1c7-46ef-9e2c-9077a85c95fb" (UID: "708430f1-d1c7-46ef-9e2c-9077a85c95fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654697 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654737 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654746 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmlt\" (UniqueName: \"kubernetes.io/projected/708430f1-d1c7-46ef-9e2c-9077a85c95fb-kube-api-access-wpmlt\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654755 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" 
(UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654767 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654781 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.654791 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/708430f1-d1c7-46ef-9e2c-9077a85c95fb-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.936772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" event={"ID":"708430f1-d1c7-46ef-9e2c-9077a85c95fb","Type":"ContainerDied","Data":"76f1675f228ca50b78c59f25bc26d48509bd340137e9544ef9d1eb5266d38687"} Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.936816 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76f1675f228ca50b78c59f25bc26d48509bd340137e9544ef9d1eb5266d38687" Jan 23 17:06:41 crc kubenswrapper[4718]: I0123 17:06:41.936978 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5gq77" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.066745 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm"] Jan 23 17:06:42 crc kubenswrapper[4718]: E0123 17:06:42.067584 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708430f1-d1c7-46ef-9e2c-9077a85c95fb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.067602 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="708430f1-d1c7-46ef-9e2c-9077a85c95fb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.067946 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="708430f1-d1c7-46ef-9e2c-9077a85c95fb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.068771 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.078112 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.078316 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.078479 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.078649 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.078834 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.100642 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm"] Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166359 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166416 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmhn\" (UniqueName: \"kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166447 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166507 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.166623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.268427 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.268473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmhn\" (UniqueName: \"kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.268505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.268526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.268590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.270290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.270575 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.272466 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.274056 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.274282 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.274581 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.276167 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.281169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.285311 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmhn\" (UniqueName: \"kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.394585 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.902314 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm"] Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.909536 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:06:42 crc kubenswrapper[4718]: I0123 17:06:42.946891 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" event={"ID":"7775dfb4-42b6-411d-8dc1-efe8daad5960","Type":"ContainerStarted","Data":"893ce6ea7db75bad3770535dfa683eb56a527c5b427ec4279d31d400066ec872"} Jan 23 17:06:43 crc kubenswrapper[4718]: I0123 17:06:43.959606 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" event={"ID":"7775dfb4-42b6-411d-8dc1-efe8daad5960","Type":"ContainerStarted","Data":"5faa133ae8506408cdb1d8b69771b4bd800bb729d0e3342ba914d1ce522d762f"} Jan 23 17:06:43 crc kubenswrapper[4718]: I0123 17:06:43.979191 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" podStartSLOduration=1.417992086 podStartE2EDuration="1.97916965s" podCreationTimestamp="2026-01-23 17:06:42 +0000 UTC" firstStartedPulling="2026-01-23 17:06:42.909340187 +0000 UTC m=+3004.056582178" lastFinishedPulling="2026-01-23 17:06:43.470517751 +0000 UTC m=+3004.617759742" observedRunningTime="2026-01-23 17:06:43.977398462 +0000 UTC m=+3005.124640513" watchObservedRunningTime="2026-01-23 17:06:43.97916965 +0000 UTC m=+3005.126411641" Jan 23 17:07:58 crc kubenswrapper[4718]: I0123 17:07:58.875405 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:07:58 crc kubenswrapper[4718]: I0123 17:07:58.876007 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:08:28 crc kubenswrapper[4718]: I0123 17:08:28.876216 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:08:28 crc kubenswrapper[4718]: I0123 17:08:28.877408 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:08:40 crc kubenswrapper[4718]: I0123 17:08:40.266800 4718 generic.go:334] "Generic (PLEG): container finished" podID="7775dfb4-42b6-411d-8dc1-efe8daad5960" containerID="5faa133ae8506408cdb1d8b69771b4bd800bb729d0e3342ba914d1ce522d762f" exitCode=0 Jan 23 17:08:40 crc kubenswrapper[4718]: I0123 17:08:40.266861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" event={"ID":"7775dfb4-42b6-411d-8dc1-efe8daad5960","Type":"ContainerDied","Data":"5faa133ae8506408cdb1d8b69771b4bd800bb729d0e3342ba914d1ce522d762f"} Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.749246 
4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.781421 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.781592 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.781625 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.782519 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.782694 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmhn\" (UniqueName: \"kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" 
(UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.782729 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.782913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam\") pod \"7775dfb4-42b6-411d-8dc1-efe8daad5960\" (UID: \"7775dfb4-42b6-411d-8dc1-efe8daad5960\") " Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.787957 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn" (OuterVolumeSpecName: "kube-api-access-mxmhn") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "kube-api-access-mxmhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.818310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.826621 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory" (OuterVolumeSpecName: "inventory") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.833818 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.855559 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.857264 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.862767 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7775dfb4-42b6-411d-8dc1-efe8daad5960" (UID: "7775dfb4-42b6-411d-8dc1-efe8daad5960"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886699 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmhn\" (UniqueName: \"kubernetes.io/projected/7775dfb4-42b6-411d-8dc1-efe8daad5960-kube-api-access-mxmhn\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886744 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886762 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886776 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886791 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc 
kubenswrapper[4718]: I0123 17:08:41.886811 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:41 crc kubenswrapper[4718]: I0123 17:08:41.886824 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7775dfb4-42b6-411d-8dc1-efe8daad5960-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.292595 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" event={"ID":"7775dfb4-42b6-411d-8dc1-efe8daad5960","Type":"ContainerDied","Data":"893ce6ea7db75bad3770535dfa683eb56a527c5b427ec4279d31d400066ec872"} Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.293069 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893ce6ea7db75bad3770535dfa683eb56a527c5b427ec4279d31d400066ec872" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.292923 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.412652 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz"] Jan 23 17:08:42 crc kubenswrapper[4718]: E0123 17:08:42.413767 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7775dfb4-42b6-411d-8dc1-efe8daad5960" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.413799 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7775dfb4-42b6-411d-8dc1-efe8daad5960" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.414190 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7775dfb4-42b6-411d-8dc1-efe8daad5960" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.415721 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.429778 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.429931 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.430175 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.430321 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.430528 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8ftz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.433022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz"] Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.508049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.508099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.508402 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdf7\" (UniqueName: \"kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.508533 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.509127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.612240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.612382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.612417 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.612508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdf7\" (UniqueName: \"kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.612538 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.616258 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.616591 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.616775 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.617856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.629512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdf7\" (UniqueName: \"kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pbkhz\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:42 crc kubenswrapper[4718]: I0123 17:08:42.750774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:08:43 crc kubenswrapper[4718]: I0123 17:08:43.323420 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz"] Jan 23 17:08:44 crc kubenswrapper[4718]: I0123 17:08:44.320347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" event={"ID":"86526a30-7eef-4621-944a-cab9bd64903b","Type":"ContainerStarted","Data":"9869c6f059a45d2cc138ee5d1ed697845a7547b46fbec43c19a8894fd116ce7f"} Jan 23 17:08:44 crc kubenswrapper[4718]: I0123 17:08:44.320656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" event={"ID":"86526a30-7eef-4621-944a-cab9bd64903b","Type":"ContainerStarted","Data":"ef380e934d74e8918b3528f0d66e923445b6823b3ea7bf062eb6103859eab7bd"} Jan 23 17:08:44 crc kubenswrapper[4718]: I0123 17:08:44.344674 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" podStartSLOduration=1.889237024 podStartE2EDuration="2.34465029s" podCreationTimestamp="2026-01-23 17:08:42 +0000 UTC" firstStartedPulling="2026-01-23 17:08:43.323015843 +0000 UTC m=+3124.470257834" lastFinishedPulling="2026-01-23 17:08:43.778429109 +0000 UTC m=+3124.925671100" observedRunningTime="2026-01-23 17:08:44.335739507 +0000 UTC m=+3125.482981518" watchObservedRunningTime="2026-01-23 17:08:44.34465029 +0000 UTC m=+3125.491892291" Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.475231 4718 generic.go:334] "Generic (PLEG): container finished" podID="86526a30-7eef-4621-944a-cab9bd64903b" 
containerID="9869c6f059a45d2cc138ee5d1ed697845a7547b46fbec43c19a8894fd116ce7f" exitCode=0 Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.475337 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" event={"ID":"86526a30-7eef-4621-944a-cab9bd64903b","Type":"ContainerDied","Data":"9869c6f059a45d2cc138ee5d1ed697845a7547b46fbec43c19a8894fd116ce7f"} Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.876134 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.876448 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.876492 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.877502 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:08:58 crc kubenswrapper[4718]: I0123 17:08:58.877566 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" gracePeriod=600 Jan 23 17:08:59 crc kubenswrapper[4718]: E0123 17:08:59.005521 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:08:59 crc kubenswrapper[4718]: I0123 17:08:59.487288 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" exitCode=0 Jan 23 17:08:59 crc kubenswrapper[4718]: I0123 17:08:59.487370 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b"} Jan 23 17:08:59 crc kubenswrapper[4718]: I0123 17:08:59.487827 4718 scope.go:117] "RemoveContainer" containerID="e52c53231d03bd46cfc9f1633b241099548028ec5e371ea0fc89767f0c855ef1" Jan 23 17:08:59 crc kubenswrapper[4718]: I0123 17:08:59.488667 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:08:59 crc kubenswrapper[4718]: E0123 17:08:59.488996 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:08:59 crc kubenswrapper[4718]: I0123 17:08:59.995607 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.038438 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0\") pod \"86526a30-7eef-4621-944a-cab9bd64903b\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.038561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory\") pod \"86526a30-7eef-4621-944a-cab9bd64903b\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.038744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fdf7\" (UniqueName: \"kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7\") pod \"86526a30-7eef-4621-944a-cab9bd64903b\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.038820 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1\") pod \"86526a30-7eef-4621-944a-cab9bd64903b\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.038890 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam\") pod \"86526a30-7eef-4621-944a-cab9bd64903b\" (UID: \"86526a30-7eef-4621-944a-cab9bd64903b\") " Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.060280 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7" (OuterVolumeSpecName: "kube-api-access-8fdf7") pod "86526a30-7eef-4621-944a-cab9bd64903b" (UID: "86526a30-7eef-4621-944a-cab9bd64903b"). InnerVolumeSpecName "kube-api-access-8fdf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.073054 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory" (OuterVolumeSpecName: "inventory") pod "86526a30-7eef-4621-944a-cab9bd64903b" (UID: "86526a30-7eef-4621-944a-cab9bd64903b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.079950 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "86526a30-7eef-4621-944a-cab9bd64903b" (UID: "86526a30-7eef-4621-944a-cab9bd64903b"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.085838 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "86526a30-7eef-4621-944a-cab9bd64903b" (UID: "86526a30-7eef-4621-944a-cab9bd64903b"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.086205 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86526a30-7eef-4621-944a-cab9bd64903b" (UID: "86526a30-7eef-4621-944a-cab9bd64903b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.141589 4718 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.141644 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.141659 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fdf7\" (UniqueName: \"kubernetes.io/projected/86526a30-7eef-4621-944a-cab9bd64903b-kube-api-access-8fdf7\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.141674 4718 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.141685 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86526a30-7eef-4621-944a-cab9bd64903b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.500335 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.500333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pbkhz" event={"ID":"86526a30-7eef-4621-944a-cab9bd64903b","Type":"ContainerDied","Data":"ef380e934d74e8918b3528f0d66e923445b6823b3ea7bf062eb6103859eab7bd"} Jan 23 17:09:00 crc kubenswrapper[4718]: I0123 17:09:00.502416 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef380e934d74e8918b3528f0d66e923445b6823b3ea7bf062eb6103859eab7bd" Jan 23 17:09:14 crc kubenswrapper[4718]: I0123 17:09:14.141524 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:09:14 crc kubenswrapper[4718]: E0123 17:09:14.142542 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:09:28 crc kubenswrapper[4718]: I0123 17:09:28.140847 4718 scope.go:117] "RemoveContainer" 
containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:09:28 crc kubenswrapper[4718]: E0123 17:09:28.141788 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:09:43 crc kubenswrapper[4718]: I0123 17:09:43.141022 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:09:43 crc kubenswrapper[4718]: E0123 17:09:43.141755 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:09:58 crc kubenswrapper[4718]: I0123 17:09:58.141568 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:09:58 crc kubenswrapper[4718]: E0123 17:09:58.142360 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:10:10 crc kubenswrapper[4718]: I0123 17:10:10.140311 4718 scope.go:117] 
"RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:10:10 crc kubenswrapper[4718]: E0123 17:10:10.141401 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:10:25 crc kubenswrapper[4718]: I0123 17:10:25.141503 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:10:25 crc kubenswrapper[4718]: E0123 17:10:25.143369 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:10:38 crc kubenswrapper[4718]: I0123 17:10:38.140573 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:10:38 crc kubenswrapper[4718]: E0123 17:10:38.141349 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:10:49 crc kubenswrapper[4718]: I0123 17:10:49.148252 
4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:10:49 crc kubenswrapper[4718]: E0123 17:10:49.149068 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:11:03 crc kubenswrapper[4718]: I0123 17:11:03.143484 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:11:03 crc kubenswrapper[4718]: E0123 17:11:03.144343 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:11:16 crc kubenswrapper[4718]: I0123 17:11:16.140579 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:11:16 crc kubenswrapper[4718]: E0123 17:11:16.143671 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:11:28 crc kubenswrapper[4718]: I0123 
17:11:28.141470 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:11:28 crc kubenswrapper[4718]: E0123 17:11:28.142176 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:11:39 crc kubenswrapper[4718]: I0123 17:11:39.143037 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:11:39 crc kubenswrapper[4718]: E0123 17:11:39.164510 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:11:51 crc kubenswrapper[4718]: I0123 17:11:51.140168 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:11:51 crc kubenswrapper[4718]: E0123 17:11:51.140919 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:12:03 crc 
kubenswrapper[4718]: I0123 17:12:03.140950 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:12:03 crc kubenswrapper[4718]: E0123 17:12:03.141877 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.090545 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:04 crc kubenswrapper[4718]: E0123 17:12:04.091123 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86526a30-7eef-4621-944a-cab9bd64903b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.091140 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="86526a30-7eef-4621-944a-cab9bd64903b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.091365 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="86526a30-7eef-4621-944a-cab9bd64903b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.093281 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.097418 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.097514 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.097537 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrfht\" (UniqueName: \"kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.122640 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.199286 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.199694 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.199728 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrfht\" (UniqueName: \"kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.199956 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.200418 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.234670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrfht\" (UniqueName: \"kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht\") pod \"redhat-operators-lkr79\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.433548 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:04 crc kubenswrapper[4718]: I0123 17:12:04.908566 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:05 crc kubenswrapper[4718]: I0123 17:12:05.531707 4718 generic.go:334] "Generic (PLEG): container finished" podID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerID="0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59" exitCode=0 Jan 23 17:12:05 crc kubenswrapper[4718]: I0123 17:12:05.531746 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerDied","Data":"0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59"} Jan 23 17:12:05 crc kubenswrapper[4718]: I0123 17:12:05.531771 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerStarted","Data":"7fe8145a360a123f03c50f52a457b378372f68f4f88a29802f084752459b41a7"} Jan 23 17:12:05 crc kubenswrapper[4718]: I0123 17:12:05.534230 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:12:06 crc kubenswrapper[4718]: I0123 17:12:06.544325 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerStarted","Data":"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d"} Jan 23 17:12:09 crc kubenswrapper[4718]: I0123 17:12:09.578057 4718 generic.go:334] "Generic (PLEG): container finished" podID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerID="8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d" exitCode=0 Jan 23 17:12:09 crc kubenswrapper[4718]: I0123 17:12:09.578129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerDied","Data":"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d"} Jan 23 17:12:10 crc kubenswrapper[4718]: I0123 17:12:10.596429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerStarted","Data":"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12"} Jan 23 17:12:10 crc kubenswrapper[4718]: I0123 17:12:10.625499 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkr79" podStartSLOduration=2.105418192 podStartE2EDuration="6.625477022s" podCreationTimestamp="2026-01-23 17:12:04 +0000 UTC" firstStartedPulling="2026-01-23 17:12:05.534019422 +0000 UTC m=+3326.681261413" lastFinishedPulling="2026-01-23 17:12:10.054078252 +0000 UTC m=+3331.201320243" observedRunningTime="2026-01-23 17:12:10.614584956 +0000 UTC m=+3331.761826957" watchObservedRunningTime="2026-01-23 17:12:10.625477022 +0000 UTC m=+3331.772719013" Jan 23 17:12:14 crc kubenswrapper[4718]: I0123 17:12:14.434220 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:14 crc kubenswrapper[4718]: I0123 17:12:14.434551 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:15 crc kubenswrapper[4718]: I0123 17:12:15.491029 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkr79" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="registry-server" probeResult="failure" output=< Jan 23 17:12:15 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 17:12:15 crc kubenswrapper[4718]: > Jan 23 17:12:17 crc kubenswrapper[4718]: I0123 
17:12:17.140771 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:12:17 crc kubenswrapper[4718]: E0123 17:12:17.141522 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:12:24 crc kubenswrapper[4718]: I0123 17:12:24.499402 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:24 crc kubenswrapper[4718]: I0123 17:12:24.573400 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:24 crc kubenswrapper[4718]: I0123 17:12:24.749161 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:25 crc kubenswrapper[4718]: I0123 17:12:25.754682 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkr79" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="registry-server" containerID="cri-o://c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12" gracePeriod=2 Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.290228 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.464753 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities\") pod \"5c34fd8f-70a3-455f-8281-c116ac9bab11\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.464827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content\") pod \"5c34fd8f-70a3-455f-8281-c116ac9bab11\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.465020 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrfht\" (UniqueName: \"kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht\") pod \"5c34fd8f-70a3-455f-8281-c116ac9bab11\" (UID: \"5c34fd8f-70a3-455f-8281-c116ac9bab11\") " Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.466011 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities" (OuterVolumeSpecName: "utilities") pod "5c34fd8f-70a3-455f-8281-c116ac9bab11" (UID: "5c34fd8f-70a3-455f-8281-c116ac9bab11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.471439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht" (OuterVolumeSpecName: "kube-api-access-xrfht") pod "5c34fd8f-70a3-455f-8281-c116ac9bab11" (UID: "5c34fd8f-70a3-455f-8281-c116ac9bab11"). InnerVolumeSpecName "kube-api-access-xrfht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.576175 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrfht\" (UniqueName: \"kubernetes.io/projected/5c34fd8f-70a3-455f-8281-c116ac9bab11-kube-api-access-xrfht\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.576220 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.607676 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c34fd8f-70a3-455f-8281-c116ac9bab11" (UID: "5c34fd8f-70a3-455f-8281-c116ac9bab11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.677074 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c34fd8f-70a3-455f-8281-c116ac9bab11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.768243 4718 generic.go:334] "Generic (PLEG): container finished" podID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerID="c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12" exitCode=0 Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.768492 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkr79" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.768487 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerDied","Data":"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12"} Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.768539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkr79" event={"ID":"5c34fd8f-70a3-455f-8281-c116ac9bab11","Type":"ContainerDied","Data":"7fe8145a360a123f03c50f52a457b378372f68f4f88a29802f084752459b41a7"} Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.768577 4718 scope.go:117] "RemoveContainer" containerID="c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.816502 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.824276 4718 scope.go:117] "RemoveContainer" containerID="8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.832347 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkr79"] Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.849050 4718 scope.go:117] "RemoveContainer" containerID="0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.911264 4718 scope.go:117] "RemoveContainer" containerID="c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12" Jan 23 17:12:26 crc kubenswrapper[4718]: E0123 17:12:26.911739 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12\": container with ID starting with c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12 not found: ID does not exist" containerID="c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.911790 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12"} err="failed to get container status \"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12\": rpc error: code = NotFound desc = could not find container \"c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12\": container with ID starting with c50c1b03e9e98d070e8a565c6bfeb958d83b86e6024d83a8d62e10114be6ba12 not found: ID does not exist" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.911825 4718 scope.go:117] "RemoveContainer" containerID="8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d" Jan 23 17:12:26 crc kubenswrapper[4718]: E0123 17:12:26.913121 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d\": container with ID starting with 8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d not found: ID does not exist" containerID="8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.913153 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d"} err="failed to get container status \"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d\": rpc error: code = NotFound desc = could not find container \"8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d\": container with ID 
starting with 8355f1ab5b28c3d6f4abc1e9241ea24780548c1b3b99f9b032f20dda8cdb3e9d not found: ID does not exist" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.913193 4718 scope.go:117] "RemoveContainer" containerID="0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59" Jan 23 17:12:26 crc kubenswrapper[4718]: E0123 17:12:26.913856 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59\": container with ID starting with 0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59 not found: ID does not exist" containerID="0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59" Jan 23 17:12:26 crc kubenswrapper[4718]: I0123 17:12:26.913960 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59"} err="failed to get container status \"0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59\": rpc error: code = NotFound desc = could not find container \"0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59\": container with ID starting with 0d55acc57060e08a25fa892cb65504b56d1e472cee35a87ecc33e79ca691af59 not found: ID does not exist" Jan 23 17:12:27 crc kubenswrapper[4718]: I0123 17:12:27.154312 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" path="/var/lib/kubelet/pods/5c34fd8f-70a3-455f-8281-c116ac9bab11/volumes" Jan 23 17:12:28 crc kubenswrapper[4718]: I0123 17:12:28.141780 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:12:28 crc kubenswrapper[4718]: E0123 17:12:28.142538 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:12:41 crc kubenswrapper[4718]: I0123 17:12:41.140305 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:12:41 crc kubenswrapper[4718]: E0123 17:12:41.141116 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:12:53 crc kubenswrapper[4718]: I0123 17:12:53.141379 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:12:53 crc kubenswrapper[4718]: E0123 17:12:53.143378 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:13:06 crc kubenswrapper[4718]: I0123 17:13:06.142102 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:13:06 crc kubenswrapper[4718]: E0123 17:13:06.143450 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.484335 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:15 crc kubenswrapper[4718]: E0123 17:13:15.486401 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="extract-utilities" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.486435 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="extract-utilities" Jan 23 17:13:15 crc kubenswrapper[4718]: E0123 17:13:15.486468 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="registry-server" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.486482 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="registry-server" Jan 23 17:13:15 crc kubenswrapper[4718]: E0123 17:13:15.486514 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="extract-content" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.486528 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="extract-content" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.487087 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c34fd8f-70a3-455f-8281-c116ac9bab11" containerName="registry-server" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.490531 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.499470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.604690 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kkm\" (UniqueName: \"kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.604769 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.604806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.706766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.706856 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.707128 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kkm\" (UniqueName: \"kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.707352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.707378 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.736757 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kkm\" (UniqueName: \"kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm\") pod \"community-operators-tmx6g\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:15 crc kubenswrapper[4718]: I0123 17:13:15.829074 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:16 crc kubenswrapper[4718]: I0123 17:13:16.346499 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:17 crc kubenswrapper[4718]: I0123 17:13:17.336458 4718 generic.go:334] "Generic (PLEG): container finished" podID="90162572-4727-4dd1-b90e-af86f19550ec" containerID="c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861" exitCode=0 Jan 23 17:13:17 crc kubenswrapper[4718]: I0123 17:13:17.336609 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerDied","Data":"c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861"} Jan 23 17:13:17 crc kubenswrapper[4718]: I0123 17:13:17.337097 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerStarted","Data":"0476452f756a1e1f230e03edc103ed68f4beefe2ba7a69ca57caf4592caefb30"} Jan 23 17:13:18 crc kubenswrapper[4718]: I0123 17:13:18.143754 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:13:18 crc kubenswrapper[4718]: E0123 17:13:18.144122 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:13:19 crc kubenswrapper[4718]: I0123 17:13:19.357514 4718 generic.go:334] "Generic (PLEG): container finished" podID="90162572-4727-4dd1-b90e-af86f19550ec" 
containerID="4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2" exitCode=0 Jan 23 17:13:19 crc kubenswrapper[4718]: I0123 17:13:19.357554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerDied","Data":"4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2"} Jan 23 17:13:20 crc kubenswrapper[4718]: I0123 17:13:20.376602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerStarted","Data":"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290"} Jan 23 17:13:20 crc kubenswrapper[4718]: I0123 17:13:20.412959 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmx6g" podStartSLOduration=2.8229764729999998 podStartE2EDuration="5.412937021s" podCreationTimestamp="2026-01-23 17:13:15 +0000 UTC" firstStartedPulling="2026-01-23 17:13:17.339504734 +0000 UTC m=+3398.486746735" lastFinishedPulling="2026-01-23 17:13:19.929465292 +0000 UTC m=+3401.076707283" observedRunningTime="2026-01-23 17:13:20.403206026 +0000 UTC m=+3401.550448027" watchObservedRunningTime="2026-01-23 17:13:20.412937021 +0000 UTC m=+3401.560179032" Jan 23 17:13:25 crc kubenswrapper[4718]: I0123 17:13:25.829949 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:25 crc kubenswrapper[4718]: I0123 17:13:25.830450 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:25 crc kubenswrapper[4718]: I0123 17:13:25.898261 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:26 crc kubenswrapper[4718]: I0123 
17:13:26.492138 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:26 crc kubenswrapper[4718]: I0123 17:13:26.547675 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:28 crc kubenswrapper[4718]: I0123 17:13:28.461986 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tmx6g" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="registry-server" containerID="cri-o://18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290" gracePeriod=2 Jan 23 17:13:28 crc kubenswrapper[4718]: I0123 17:13:28.993070 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.050026 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9kkm\" (UniqueName: \"kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm\") pod \"90162572-4727-4dd1-b90e-af86f19550ec\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.050504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities\") pod \"90162572-4727-4dd1-b90e-af86f19550ec\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.050657 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content\") pod \"90162572-4727-4dd1-b90e-af86f19550ec\" (UID: \"90162572-4727-4dd1-b90e-af86f19550ec\") " Jan 23 17:13:29 crc kubenswrapper[4718]: 
I0123 17:13:29.051678 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities" (OuterVolumeSpecName: "utilities") pod "90162572-4727-4dd1-b90e-af86f19550ec" (UID: "90162572-4727-4dd1-b90e-af86f19550ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.057470 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm" (OuterVolumeSpecName: "kube-api-access-r9kkm") pod "90162572-4727-4dd1-b90e-af86f19550ec" (UID: "90162572-4727-4dd1-b90e-af86f19550ec"). InnerVolumeSpecName "kube-api-access-r9kkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.153222 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.153261 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9kkm\" (UniqueName: \"kubernetes.io/projected/90162572-4727-4dd1-b90e-af86f19550ec-kube-api-access-r9kkm\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.355950 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90162572-4727-4dd1-b90e-af86f19550ec" (UID: "90162572-4727-4dd1-b90e-af86f19550ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.360566 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90162572-4727-4dd1-b90e-af86f19550ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.476977 4718 generic.go:334] "Generic (PLEG): container finished" podID="90162572-4727-4dd1-b90e-af86f19550ec" containerID="18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290" exitCode=0 Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.477048 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmx6g" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.477051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerDied","Data":"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290"} Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.477494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmx6g" event={"ID":"90162572-4727-4dd1-b90e-af86f19550ec","Type":"ContainerDied","Data":"0476452f756a1e1f230e03edc103ed68f4beefe2ba7a69ca57caf4592caefb30"} Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.477550 4718 scope.go:117] "RemoveContainer" containerID="18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.509447 4718 scope.go:117] "RemoveContainer" containerID="4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.516450 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:29 crc kubenswrapper[4718]: 
I0123 17:13:29.531435 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tmx6g"] Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.539205 4718 scope.go:117] "RemoveContainer" containerID="c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.605484 4718 scope.go:117] "RemoveContainer" containerID="18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290" Jan 23 17:13:29 crc kubenswrapper[4718]: E0123 17:13:29.606079 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290\": container with ID starting with 18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290 not found: ID does not exist" containerID="18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.606189 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290"} err="failed to get container status \"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290\": rpc error: code = NotFound desc = could not find container \"18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290\": container with ID starting with 18b818b33863a59e7344c2959a1fdd95e76bb222a05f0ecf308204126b626290 not found: ID does not exist" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.606219 4718 scope.go:117] "RemoveContainer" containerID="4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2" Jan 23 17:13:29 crc kubenswrapper[4718]: E0123 17:13:29.606652 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2\": container 
with ID starting with 4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2 not found: ID does not exist" containerID="4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.606678 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2"} err="failed to get container status \"4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2\": rpc error: code = NotFound desc = could not find container \"4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2\": container with ID starting with 4f566c8bbe2ab804a0370de9ff1452582b26293143d57d551485dd1b66f25fa2 not found: ID does not exist" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.606695 4718 scope.go:117] "RemoveContainer" containerID="c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861" Jan 23 17:13:29 crc kubenswrapper[4718]: E0123 17:13:29.607139 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861\": container with ID starting with c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861 not found: ID does not exist" containerID="c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861" Jan 23 17:13:29 crc kubenswrapper[4718]: I0123 17:13:29.607186 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861"} err="failed to get container status \"c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861\": rpc error: code = NotFound desc = could not find container \"c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861\": container with ID starting with c592aadece462ec71a1b1da6fe6e26a2ee934f29d1b41cc0f448de4fbde3a861 not 
found: ID does not exist" Jan 23 17:13:31 crc kubenswrapper[4718]: I0123 17:13:31.157539 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90162572-4727-4dd1-b90e-af86f19550ec" path="/var/lib/kubelet/pods/90162572-4727-4dd1-b90e-af86f19550ec/volumes" Jan 23 17:13:33 crc kubenswrapper[4718]: I0123 17:13:33.141095 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:13:33 crc kubenswrapper[4718]: E0123 17:13:33.141963 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.876821 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:39 crc kubenswrapper[4718]: E0123 17:13:39.877774 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="extract-utilities" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.877787 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="extract-utilities" Jan 23 17:13:39 crc kubenswrapper[4718]: E0123 17:13:39.877805 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="registry-server" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.877812 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="registry-server" Jan 23 17:13:39 crc kubenswrapper[4718]: E0123 17:13:39.877841 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="extract-content" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.877847 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="extract-content" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.878059 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="90162572-4727-4dd1-b90e-af86f19550ec" containerName="registry-server" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.879612 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:39 crc kubenswrapper[4718]: I0123 17:13:39.898691 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.043280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmrx\" (UniqueName: \"kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.043483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.043569 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.146079 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmrx\" (UniqueName: \"kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.146254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.146333 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.146858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.146902 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.170431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmrx\" (UniqueName: \"kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx\") pod \"certified-operators-4hj46\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.202819 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:40 crc kubenswrapper[4718]: I0123 17:13:40.803955 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:41 crc kubenswrapper[4718]: I0123 17:13:41.639435 4718 generic.go:334] "Generic (PLEG): container finished" podID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerID="d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536" exitCode=0 Jan 23 17:13:41 crc kubenswrapper[4718]: I0123 17:13:41.639796 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerDied","Data":"d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536"} Jan 23 17:13:41 crc kubenswrapper[4718]: I0123 17:13:41.639845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerStarted","Data":"2af7ccce257bf10b1c8882ca56a3e9c593f903862152a458d2711e799b6251e9"} Jan 23 17:13:42 crc kubenswrapper[4718]: I0123 17:13:42.657329 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerStarted","Data":"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad"} Jan 23 17:13:43 crc kubenswrapper[4718]: I0123 17:13:43.671167 4718 generic.go:334] "Generic (PLEG): container finished" podID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerID="f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad" exitCode=0 Jan 23 17:13:43 crc kubenswrapper[4718]: I0123 17:13:43.671376 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerDied","Data":"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad"} Jan 23 17:13:45 crc kubenswrapper[4718]: I0123 17:13:45.694325 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerStarted","Data":"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7"} Jan 23 17:13:45 crc kubenswrapper[4718]: I0123 17:13:45.715555 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4hj46" podStartSLOduration=3.682730684 podStartE2EDuration="6.715536957s" podCreationTimestamp="2026-01-23 17:13:39 +0000 UTC" firstStartedPulling="2026-01-23 17:13:41.642258447 +0000 UTC m=+3422.789500438" lastFinishedPulling="2026-01-23 17:13:44.67506472 +0000 UTC m=+3425.822306711" observedRunningTime="2026-01-23 17:13:45.71158998 +0000 UTC m=+3426.858831991" watchObservedRunningTime="2026-01-23 17:13:45.715536957 +0000 UTC m=+3426.862778948" Jan 23 17:13:46 crc kubenswrapper[4718]: I0123 17:13:46.141225 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:13:46 crc kubenswrapper[4718]: E0123 17:13:46.141513 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:13:50 crc kubenswrapper[4718]: I0123 17:13:50.203697 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:50 crc kubenswrapper[4718]: I0123 17:13:50.205226 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:50 crc kubenswrapper[4718]: I0123 17:13:50.252403 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:50 crc kubenswrapper[4718]: I0123 17:13:50.802954 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:50 crc kubenswrapper[4718]: I0123 17:13:50.859172 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:52 crc kubenswrapper[4718]: I0123 17:13:52.762653 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4hj46" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="registry-server" containerID="cri-o://6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7" gracePeriod=2 Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.302012 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.363354 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmrx\" (UniqueName: \"kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx\") pod \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.363508 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities\") pod \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.363618 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content\") pod \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\" (UID: \"12a65bbd-a659-47e1-a1b8-243c7e1dfdef\") " Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.364271 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities" (OuterVolumeSpecName: "utilities") pod "12a65bbd-a659-47e1-a1b8-243c7e1dfdef" (UID: "12a65bbd-a659-47e1-a1b8-243c7e1dfdef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.364885 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.369737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx" (OuterVolumeSpecName: "kube-api-access-zmmrx") pod "12a65bbd-a659-47e1-a1b8-243c7e1dfdef" (UID: "12a65bbd-a659-47e1-a1b8-243c7e1dfdef"). InnerVolumeSpecName "kube-api-access-zmmrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.410139 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12a65bbd-a659-47e1-a1b8-243c7e1dfdef" (UID: "12a65bbd-a659-47e1-a1b8-243c7e1dfdef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.466821 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.466854 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmmrx\" (UniqueName: \"kubernetes.io/projected/12a65bbd-a659-47e1-a1b8-243c7e1dfdef-kube-api-access-zmmrx\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.774479 4718 generic.go:334] "Generic (PLEG): container finished" podID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerID="6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7" exitCode=0 Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.774540 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hj46" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.774556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerDied","Data":"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7"} Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.774927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hj46" event={"ID":"12a65bbd-a659-47e1-a1b8-243c7e1dfdef","Type":"ContainerDied","Data":"2af7ccce257bf10b1c8882ca56a3e9c593f903862152a458d2711e799b6251e9"} Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.774951 4718 scope.go:117] "RemoveContainer" containerID="6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.798485 4718 scope.go:117] "RemoveContainer" 
containerID="f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.809964 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.821656 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4hj46"] Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.828919 4718 scope.go:117] "RemoveContainer" containerID="d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.904477 4718 scope.go:117] "RemoveContainer" containerID="6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7" Jan 23 17:13:53 crc kubenswrapper[4718]: E0123 17:13:53.905254 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7\": container with ID starting with 6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7 not found: ID does not exist" containerID="6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.905357 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7"} err="failed to get container status \"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7\": rpc error: code = NotFound desc = could not find container \"6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7\": container with ID starting with 6fcdcbeaf8a637d6a365301bcc4cfa12a790aeb5934f28a8656bab7fe8fddda7 not found: ID does not exist" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.905422 4718 scope.go:117] "RemoveContainer" 
containerID="f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad" Jan 23 17:13:53 crc kubenswrapper[4718]: E0123 17:13:53.905902 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad\": container with ID starting with f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad not found: ID does not exist" containerID="f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.905949 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad"} err="failed to get container status \"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad\": rpc error: code = NotFound desc = could not find container \"f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad\": container with ID starting with f4b0718f207a9b4e364a32197d783bfb79135150415382ca7e0225499f24b2ad not found: ID does not exist" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.905978 4718 scope.go:117] "RemoveContainer" containerID="d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536" Jan 23 17:13:53 crc kubenswrapper[4718]: E0123 17:13:53.906286 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536\": container with ID starting with d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536 not found: ID does not exist" containerID="d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536" Jan 23 17:13:53 crc kubenswrapper[4718]: I0123 17:13:53.906329 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536"} err="failed to get container status \"d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536\": rpc error: code = NotFound desc = could not find container \"d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536\": container with ID starting with d0cfdb61ffc098b6de73008db43b6a747d32880bd56bfd8a5e93648f9c8cc536 not found: ID does not exist" Jan 23 17:13:55 crc kubenswrapper[4718]: I0123 17:13:55.153016 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" path="/var/lib/kubelet/pods/12a65bbd-a659-47e1-a1b8-243c7e1dfdef/volumes" Jan 23 17:14:01 crc kubenswrapper[4718]: I0123 17:14:01.140879 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:14:01 crc kubenswrapper[4718]: I0123 17:14:01.875982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d"} Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.808878 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:14:52 crc kubenswrapper[4718]: E0123 17:14:52.810793 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="extract-content" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.810884 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="extract-content" Jan 23 17:14:52 crc kubenswrapper[4718]: E0123 17:14:52.810955 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" 
containerName="registry-server" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.811016 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="registry-server" Jan 23 17:14:52 crc kubenswrapper[4718]: E0123 17:14:52.811096 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="extract-utilities" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.811185 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="extract-utilities" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.811536 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a65bbd-a659-47e1-a1b8-243c7e1dfdef" containerName="registry-server" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.813445 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.821887 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.913804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6gq\" (UniqueName: \"kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.913906 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " 
pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:52 crc kubenswrapper[4718]: I0123 17:14:52.913932 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.017369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t6gq\" (UniqueName: \"kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.017758 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.017884 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.018680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " 
pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.019717 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.039103 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t6gq\" (UniqueName: \"kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq\") pod \"redhat-marketplace-xxr9l\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.139073 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:14:53 crc kubenswrapper[4718]: I0123 17:14:53.668470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:14:54 crc kubenswrapper[4718]: I0123 17:14:54.447622 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerID="7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d" exitCode=0 Jan 23 17:14:54 crc kubenswrapper[4718]: I0123 17:14:54.447707 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerDied","Data":"7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d"} Jan 23 17:14:54 crc kubenswrapper[4718]: I0123 17:14:54.448131 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" 
event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerStarted","Data":"efb1209c0a6c6181045c5b3c75ddf416d71368b706457edb104ee5b9bccc26aa"} Jan 23 17:14:55 crc kubenswrapper[4718]: I0123 17:14:55.460979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerStarted","Data":"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803"} Jan 23 17:14:56 crc kubenswrapper[4718]: I0123 17:14:56.482495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerDied","Data":"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803"} Jan 23 17:14:56 crc kubenswrapper[4718]: I0123 17:14:56.484392 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerID="7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803" exitCode=0 Jan 23 17:14:57 crc kubenswrapper[4718]: I0123 17:14:57.498283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerStarted","Data":"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3"} Jan 23 17:14:57 crc kubenswrapper[4718]: I0123 17:14:57.518860 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxr9l" podStartSLOduration=3.061059515 podStartE2EDuration="5.518842209s" podCreationTimestamp="2026-01-23 17:14:52 +0000 UTC" firstStartedPulling="2026-01-23 17:14:54.450399887 +0000 UTC m=+3495.597641868" lastFinishedPulling="2026-01-23 17:14:56.908182571 +0000 UTC m=+3498.055424562" observedRunningTime="2026-01-23 17:14:57.513103213 +0000 UTC m=+3498.660345204" watchObservedRunningTime="2026-01-23 17:14:57.518842209 +0000 UTC 
m=+3498.666084200" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.165001 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj"] Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.167414 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.169649 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.170264 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.180961 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj"] Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.300665 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.301103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4kj\" (UniqueName: \"kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.301230 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.403876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.403935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4kj\" (UniqueName: \"kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.403964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.404726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.409380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.421565 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4kj\" (UniqueName: \"kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj\") pod \"collect-profiles-29486475-8clmj\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:00 crc kubenswrapper[4718]: I0123 17:15:00.496237 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:01 crc kubenswrapper[4718]: I0123 17:15:01.001433 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj"] Jan 23 17:15:01 crc kubenswrapper[4718]: W0123 17:15:01.002689 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd64bef_5c37_47b0_9777_0e94578f7f35.slice/crio-c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add WatchSource:0}: Error finding container c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add: Status 404 returned error can't find the container with id c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add Jan 23 17:15:01 crc kubenswrapper[4718]: I0123 17:15:01.542738 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" event={"ID":"8cd64bef-5c37-47b0-9777-0e94578f7f35","Type":"ContainerStarted","Data":"521ea7467cdea2f006fcd603365ab000659809e990042087dfaf6ba620e2a023"} Jan 23 17:15:01 crc kubenswrapper[4718]: I0123 17:15:01.543073 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" event={"ID":"8cd64bef-5c37-47b0-9777-0e94578f7f35","Type":"ContainerStarted","Data":"c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add"} Jan 23 17:15:01 crc kubenswrapper[4718]: I0123 17:15:01.565921 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" podStartSLOduration=1.5659009350000002 podStartE2EDuration="1.565900935s" podCreationTimestamp="2026-01-23 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
17:15:01.558145755 +0000 UTC m=+3502.705387746" watchObservedRunningTime="2026-01-23 17:15:01.565900935 +0000 UTC m=+3502.713142926" Jan 23 17:15:02 crc kubenswrapper[4718]: I0123 17:15:02.554676 4718 generic.go:334] "Generic (PLEG): container finished" podID="8cd64bef-5c37-47b0-9777-0e94578f7f35" containerID="521ea7467cdea2f006fcd603365ab000659809e990042087dfaf6ba620e2a023" exitCode=0 Jan 23 17:15:02 crc kubenswrapper[4718]: I0123 17:15:02.554767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" event={"ID":"8cd64bef-5c37-47b0-9777-0e94578f7f35","Type":"ContainerDied","Data":"521ea7467cdea2f006fcd603365ab000659809e990042087dfaf6ba620e2a023"} Jan 23 17:15:03 crc kubenswrapper[4718]: I0123 17:15:03.140076 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:03 crc kubenswrapper[4718]: I0123 17:15:03.155171 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:03 crc kubenswrapper[4718]: I0123 17:15:03.195081 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:03 crc kubenswrapper[4718]: I0123 17:15:03.643810 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.012832 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.091069 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4kj\" (UniqueName: \"kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj\") pod \"8cd64bef-5c37-47b0-9777-0e94578f7f35\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.091224 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume\") pod \"8cd64bef-5c37-47b0-9777-0e94578f7f35\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.091397 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume\") pod \"8cd64bef-5c37-47b0-9777-0e94578f7f35\" (UID: \"8cd64bef-5c37-47b0-9777-0e94578f7f35\") " Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.092249 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume" (OuterVolumeSpecName: "config-volume") pod "8cd64bef-5c37-47b0-9777-0e94578f7f35" (UID: "8cd64bef-5c37-47b0-9777-0e94578f7f35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.102842 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj" (OuterVolumeSpecName: "kube-api-access-hr4kj") pod "8cd64bef-5c37-47b0-9777-0e94578f7f35" (UID: "8cd64bef-5c37-47b0-9777-0e94578f7f35"). 
InnerVolumeSpecName "kube-api-access-hr4kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.105786 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8cd64bef-5c37-47b0-9777-0e94578f7f35" (UID: "8cd64bef-5c37-47b0-9777-0e94578f7f35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.201851 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cd64bef-5c37-47b0-9777-0e94578f7f35-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.201894 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd64bef-5c37-47b0-9777-0e94578f7f35-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.201906 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4kj\" (UniqueName: \"kubernetes.io/projected/8cd64bef-5c37-47b0-9777-0e94578f7f35-kube-api-access-hr4kj\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.583085 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.583387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj" event={"ID":"8cd64bef-5c37-47b0-9777-0e94578f7f35","Type":"ContainerDied","Data":"c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add"} Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.584432 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1050a22741ef78c940f7fd0c6db41a12bdf1863343414063df282feeb957add" Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.658286 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4"] Jan 23 17:15:04 crc kubenswrapper[4718]: I0123 17:15:04.668931 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486430-k48z4"] Jan 23 17:15:05 crc kubenswrapper[4718]: I0123 17:15:05.154262 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659d9e7c-d96f-4e98-b3b2-2c99f81d25c8" path="/var/lib/kubelet/pods/659d9e7c-d96f-4e98-b3b2-2c99f81d25c8/volumes" Jan 23 17:15:06 crc kubenswrapper[4718]: I0123 17:15:06.796745 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:15:06 crc kubenswrapper[4718]: I0123 17:15:06.797475 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxr9l" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="registry-server" containerID="cri-o://ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3" gracePeriod=2 Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.330568 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.478896 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content\") pod \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.479490 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities\") pod \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.479555 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t6gq\" (UniqueName: \"kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq\") pod \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\" (UID: \"c2384140-5e6e-41bb-b7e9-ecdb7512375c\") " Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.481501 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities" (OuterVolumeSpecName: "utilities") pod "c2384140-5e6e-41bb-b7e9-ecdb7512375c" (UID: "c2384140-5e6e-41bb-b7e9-ecdb7512375c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.486184 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq" (OuterVolumeSpecName: "kube-api-access-5t6gq") pod "c2384140-5e6e-41bb-b7e9-ecdb7512375c" (UID: "c2384140-5e6e-41bb-b7e9-ecdb7512375c"). InnerVolumeSpecName "kube-api-access-5t6gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.503361 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2384140-5e6e-41bb-b7e9-ecdb7512375c" (UID: "c2384140-5e6e-41bb-b7e9-ecdb7512375c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.583029 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.583089 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t6gq\" (UniqueName: \"kubernetes.io/projected/c2384140-5e6e-41bb-b7e9-ecdb7512375c-kube-api-access-5t6gq\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.583101 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2384140-5e6e-41bb-b7e9-ecdb7512375c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.613557 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerID="ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3" exitCode=0 Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.613597 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerDied","Data":"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3"} Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.613664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-xxr9l" event={"ID":"c2384140-5e6e-41bb-b7e9-ecdb7512375c","Type":"ContainerDied","Data":"efb1209c0a6c6181045c5b3c75ddf416d71368b706457edb104ee5b9bccc26aa"} Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.613675 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxr9l" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.613685 4718 scope.go:117] "RemoveContainer" containerID="ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.643077 4718 scope.go:117] "RemoveContainer" containerID="7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.648211 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.658821 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxr9l"] Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.662427 4718 scope.go:117] "RemoveContainer" containerID="7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.724297 4718 scope.go:117] "RemoveContainer" containerID="ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3" Jan 23 17:15:07 crc kubenswrapper[4718]: E0123 17:15:07.724969 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3\": container with ID starting with ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3 not found: ID does not exist" containerID="ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.725013 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3"} err="failed to get container status \"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3\": rpc error: code = NotFound desc = could not find container \"ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3\": container with ID starting with ce8d148b86af88e1e3ab2bf73b62584943895ebeb82ae77f52545d1ab16534b3 not found: ID does not exist" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.725039 4718 scope.go:117] "RemoveContainer" containerID="7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803" Jan 23 17:15:07 crc kubenswrapper[4718]: E0123 17:15:07.725355 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803\": container with ID starting with 7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803 not found: ID does not exist" containerID="7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.725393 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803"} err="failed to get container status \"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803\": rpc error: code = NotFound desc = could not find container \"7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803\": container with ID starting with 7405671b3d2213a0468e5c4f272cc6a01b877d4e976a6a7a8ae13e5d735bd803 not found: ID does not exist" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.725423 4718 scope.go:117] "RemoveContainer" containerID="7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d" Jan 23 17:15:07 crc kubenswrapper[4718]: E0123 
17:15:07.725793 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d\": container with ID starting with 7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d not found: ID does not exist" containerID="7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d" Jan 23 17:15:07 crc kubenswrapper[4718]: I0123 17:15:07.725828 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d"} err="failed to get container status \"7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d\": rpc error: code = NotFound desc = could not find container \"7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d\": container with ID starting with 7aa2288cdc2a729b4f4ba3b0161a69b3b43a2fdef9090cadd92f0e6f25e5b34d not found: ID does not exist" Jan 23 17:15:09 crc kubenswrapper[4718]: I0123 17:15:09.177605 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" path="/var/lib/kubelet/pods/c2384140-5e6e-41bb-b7e9-ecdb7512375c/volumes" Jan 23 17:16:02 crc kubenswrapper[4718]: I0123 17:16:02.621114 4718 scope.go:117] "RemoveContainer" containerID="fbee9811e68b49242de9623807874fecd620cb8ef8425662623893402cd997bb" Jan 23 17:16:28 crc kubenswrapper[4718]: I0123 17:16:28.875990 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:16:28 crc kubenswrapper[4718]: I0123 17:16:28.876507 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:16:58 crc kubenswrapper[4718]: I0123 17:16:58.875505 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:16:58 crc kubenswrapper[4718]: I0123 17:16:58.876067 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:17:28 crc kubenswrapper[4718]: I0123 17:17:28.876078 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:17:28 crc kubenswrapper[4718]: I0123 17:17:28.876662 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:17:28 crc kubenswrapper[4718]: I0123 17:17:28.876727 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:17:28 crc kubenswrapper[4718]: I0123 17:17:28.877464 4718 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:17:28 crc kubenswrapper[4718]: I0123 17:17:28.877517 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d" gracePeriod=600 Jan 23 17:17:29 crc kubenswrapper[4718]: I0123 17:17:29.175767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d"} Jan 23 17:17:29 crc kubenswrapper[4718]: I0123 17:17:29.176423 4718 scope.go:117] "RemoveContainer" containerID="80f4f6300638ace60fecb5266739f4cab4fef549f750c7e725007f8fc8ded20b" Jan 23 17:17:29 crc kubenswrapper[4718]: I0123 17:17:29.175897 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d" exitCode=0 Jan 23 17:17:30 crc kubenswrapper[4718]: I0123 17:17:30.188526 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc"} Jan 23 17:19:58 crc kubenswrapper[4718]: I0123 17:19:58.876151 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:19:58 crc kubenswrapper[4718]: I0123 17:19:58.876760 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:28 crc kubenswrapper[4718]: I0123 17:20:28.876050 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:20:28 crc kubenswrapper[4718]: I0123 17:20:28.877289 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:58 crc kubenswrapper[4718]: I0123 17:20:58.875802 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:20:58 crc kubenswrapper[4718]: I0123 17:20:58.876435 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:58 crc kubenswrapper[4718]: I0123 17:20:58.876483 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:20:58 crc kubenswrapper[4718]: I0123 17:20:58.878050 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:20:58 crc kubenswrapper[4718]: I0123 17:20:58.878136 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" gracePeriod=600 Jan 23 17:20:59 crc kubenswrapper[4718]: E0123 17:20:59.000313 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:20:59 crc kubenswrapper[4718]: I0123 17:20:59.383617 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" exitCode=0 Jan 23 17:20:59 crc kubenswrapper[4718]: I0123 17:20:59.384201 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc"} Jan 23 17:20:59 crc kubenswrapper[4718]: I0123 17:20:59.384739 4718 scope.go:117] "RemoveContainer" containerID="1283aded459038c823b90fc7862d305e62f560bd67cf508d721f4d684597b57d" Jan 23 17:20:59 crc kubenswrapper[4718]: I0123 17:20:59.385809 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:20:59 crc kubenswrapper[4718]: E0123 17:20:59.386382 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:21:13 crc kubenswrapper[4718]: I0123 17:21:13.140840 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:21:13 crc kubenswrapper[4718]: E0123 17:21:13.141661 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:21:18 crc kubenswrapper[4718]: I0123 17:21:18.752076 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" 
podUID="9e8950bc-8213-40eb-9bb7-2e1a8c66b57b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:21:18 crc kubenswrapper[4718]: I0123 17:21:18.755621 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" podUID="d869ec7c-ddd9-4e17-9154-a793539a2a00" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:21:24 crc kubenswrapper[4718]: I0123 17:21:24.140647 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:21:24 crc kubenswrapper[4718]: E0123 17:21:24.142276 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:21:37 crc kubenswrapper[4718]: I0123 17:21:37.140750 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:21:37 crc kubenswrapper[4718]: E0123 17:21:37.141405 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:21:49 crc 
kubenswrapper[4718]: I0123 17:21:49.141701 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:21:49 crc kubenswrapper[4718]: E0123 17:21:49.142462 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:22:02 crc kubenswrapper[4718]: I0123 17:22:02.141261 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:22:02 crc kubenswrapper[4718]: E0123 17:22:02.142247 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:22:14 crc kubenswrapper[4718]: I0123 17:22:14.141002 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:22:14 crc kubenswrapper[4718]: E0123 17:22:14.142454 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 
23 17:22:26 crc kubenswrapper[4718]: I0123 17:22:26.141871 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:22:26 crc kubenswrapper[4718]: E0123 17:22:26.142959 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:22:38 crc kubenswrapper[4718]: I0123 17:22:38.140380 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:22:38 crc kubenswrapper[4718]: E0123 17:22:38.141345 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:22:49 crc kubenswrapper[4718]: I0123 17:22:49.147957 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:22:49 crc kubenswrapper[4718]: E0123 17:22:49.148837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:02 crc kubenswrapper[4718]: I0123 17:23:02.140682 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:02 crc kubenswrapper[4718]: E0123 17:23:02.142201 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:14 crc kubenswrapper[4718]: I0123 17:23:14.140658 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:14 crc kubenswrapper[4718]: E0123 17:23:14.141446 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:26 crc kubenswrapper[4718]: I0123 17:23:26.140546 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:26 crc kubenswrapper[4718]: E0123 17:23:26.141393 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.992864 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:35 crc kubenswrapper[4718]: E0123 17:23:35.994102 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="extract-content" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994142 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="extract-content" Jan 23 17:23:35 crc kubenswrapper[4718]: E0123 17:23:35.994187 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd64bef-5c37-47b0-9777-0e94578f7f35" containerName="collect-profiles" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994195 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd64bef-5c37-47b0-9777-0e94578f7f35" containerName="collect-profiles" Jan 23 17:23:35 crc kubenswrapper[4718]: E0123 17:23:35.994213 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="extract-utilities" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994222 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="extract-utilities" Jan 23 17:23:35 crc kubenswrapper[4718]: E0123 17:23:35.994243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="registry-server" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994251 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="registry-server" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994532 4718 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8cd64bef-5c37-47b0-9777-0e94578f7f35" containerName="collect-profiles" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.994574 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2384140-5e6e-41bb-b7e9-ecdb7512375c" containerName="registry-server" Jan 23 17:23:35 crc kubenswrapper[4718]: I0123 17:23:35.996770 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.019303 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.085151 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrvb\" (UniqueName: \"kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.085402 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.085606 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.188260 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.188378 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.188526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrvb\" (UniqueName: \"kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.188860 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.188875 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.211185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mmrvb\" (UniqueName: \"kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb\") pod \"community-operators-jhsfc\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.334500 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:36 crc kubenswrapper[4718]: I0123 17:23:36.883975 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:37 crc kubenswrapper[4718]: I0123 17:23:37.141101 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:37 crc kubenswrapper[4718]: E0123 17:23:37.141736 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:37 crc kubenswrapper[4718]: I0123 17:23:37.310691 4718 generic.go:334] "Generic (PLEG): container finished" podID="188ebe99-20ae-455a-babe-62f0df57972d" containerID="c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43" exitCode=0 Jan 23 17:23:37 crc kubenswrapper[4718]: I0123 17:23:37.310732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerDied","Data":"c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43"} Jan 23 17:23:37 crc kubenswrapper[4718]: I0123 17:23:37.310757 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerStarted","Data":"b53f0f753a6745dc0750cfbfb4b18fb984978a3e4d1c06c2b91a56bddb57c569"} Jan 23 17:23:37 crc kubenswrapper[4718]: I0123 17:23:37.313404 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:23:39 crc kubenswrapper[4718]: I0123 17:23:39.332293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerStarted","Data":"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9"} Jan 23 17:23:40 crc kubenswrapper[4718]: I0123 17:23:40.348551 4718 generic.go:334] "Generic (PLEG): container finished" podID="188ebe99-20ae-455a-babe-62f0df57972d" containerID="21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9" exitCode=0 Jan 23 17:23:40 crc kubenswrapper[4718]: I0123 17:23:40.348938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerDied","Data":"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9"} Jan 23 17:23:41 crc kubenswrapper[4718]: I0123 17:23:41.371554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerStarted","Data":"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413"} Jan 23 17:23:41 crc kubenswrapper[4718]: I0123 17:23:41.403764 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jhsfc" podStartSLOduration=2.9711643370000003 podStartE2EDuration="6.403743112s" podCreationTimestamp="2026-01-23 17:23:35 +0000 UTC" firstStartedPulling="2026-01-23 17:23:37.312918485 +0000 UTC m=+4018.460160506" 
lastFinishedPulling="2026-01-23 17:23:40.74549729 +0000 UTC m=+4021.892739281" observedRunningTime="2026-01-23 17:23:41.393910214 +0000 UTC m=+4022.541152215" watchObservedRunningTime="2026-01-23 17:23:41.403743112 +0000 UTC m=+4022.550985103" Jan 23 17:23:46 crc kubenswrapper[4718]: I0123 17:23:46.335062 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:46 crc kubenswrapper[4718]: I0123 17:23:46.335626 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:46 crc kubenswrapper[4718]: I0123 17:23:46.412473 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:46 crc kubenswrapper[4718]: I0123 17:23:46.476796 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:46 crc kubenswrapper[4718]: I0123 17:23:46.652202 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:48 crc kubenswrapper[4718]: I0123 17:23:48.141064 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:48 crc kubenswrapper[4718]: E0123 17:23:48.141690 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:23:48 crc kubenswrapper[4718]: I0123 17:23:48.445873 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-jhsfc" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="registry-server" containerID="cri-o://89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413" gracePeriod=2 Jan 23 17:23:48 crc kubenswrapper[4718]: I0123 17:23:48.951053 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.014406 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmrvb\" (UniqueName: \"kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb\") pod \"188ebe99-20ae-455a-babe-62f0df57972d\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.014476 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content\") pod \"188ebe99-20ae-455a-babe-62f0df57972d\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.014613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities\") pod \"188ebe99-20ae-455a-babe-62f0df57972d\" (UID: \"188ebe99-20ae-455a-babe-62f0df57972d\") " Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.015989 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities" (OuterVolumeSpecName: "utilities") pod "188ebe99-20ae-455a-babe-62f0df57972d" (UID: "188ebe99-20ae-455a-babe-62f0df57972d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.023066 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb" (OuterVolumeSpecName: "kube-api-access-mmrvb") pod "188ebe99-20ae-455a-babe-62f0df57972d" (UID: "188ebe99-20ae-455a-babe-62f0df57972d"). InnerVolumeSpecName "kube-api-access-mmrvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.093441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "188ebe99-20ae-455a-babe-62f0df57972d" (UID: "188ebe99-20ae-455a-babe-62f0df57972d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.121702 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmrvb\" (UniqueName: \"kubernetes.io/projected/188ebe99-20ae-455a-babe-62f0df57972d-kube-api-access-mmrvb\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.121752 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.121761 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188ebe99-20ae-455a-babe-62f0df57972d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.459735 4718 generic.go:334] "Generic (PLEG): container finished" podID="188ebe99-20ae-455a-babe-62f0df57972d" 
containerID="89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413" exitCode=0 Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.459868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerDied","Data":"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413"} Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.460052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhsfc" event={"ID":"188ebe99-20ae-455a-babe-62f0df57972d","Type":"ContainerDied","Data":"b53f0f753a6745dc0750cfbfb4b18fb984978a3e4d1c06c2b91a56bddb57c569"} Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.459945 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhsfc" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.460079 4718 scope.go:117] "RemoveContainer" containerID="89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.486282 4718 scope.go:117] "RemoveContainer" containerID="21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.487746 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.503766 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jhsfc"] Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.513230 4718 scope.go:117] "RemoveContainer" containerID="c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.578899 4718 scope.go:117] "RemoveContainer" containerID="89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413" Jan 23 
17:23:49 crc kubenswrapper[4718]: E0123 17:23:49.582119 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413\": container with ID starting with 89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413 not found: ID does not exist" containerID="89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.582145 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413"} err="failed to get container status \"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413\": rpc error: code = NotFound desc = could not find container \"89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413\": container with ID starting with 89d71bd16e6ed422ae4ecf01000daede304890cff4e6edd2c1dc7a1bd8d6a413 not found: ID does not exist" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.582166 4718 scope.go:117] "RemoveContainer" containerID="21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9" Jan 23 17:23:49 crc kubenswrapper[4718]: E0123 17:23:49.584554 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9\": container with ID starting with 21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9 not found: ID does not exist" containerID="21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.584583 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9"} err="failed to get container status 
\"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9\": rpc error: code = NotFound desc = could not find container \"21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9\": container with ID starting with 21ea77a50b310357b182d589423d92c33176a8df3730084d7749fee184e9bab9 not found: ID does not exist" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.584601 4718 scope.go:117] "RemoveContainer" containerID="c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43" Jan 23 17:23:49 crc kubenswrapper[4718]: E0123 17:23:49.587230 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43\": container with ID starting with c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43 not found: ID does not exist" containerID="c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43" Jan 23 17:23:49 crc kubenswrapper[4718]: I0123 17:23:49.587255 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43"} err="failed to get container status \"c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43\": rpc error: code = NotFound desc = could not find container \"c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43\": container with ID starting with c977523d7d04f60b85959d123cde7089f568ef0a03a98df0a1c7501b817c2f43 not found: ID does not exist" Jan 23 17:23:51 crc kubenswrapper[4718]: I0123 17:23:51.154076 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188ebe99-20ae-455a-babe-62f0df57972d" path="/var/lib/kubelet/pods/188ebe99-20ae-455a-babe-62f0df57972d/volumes" Jan 23 17:23:56 crc kubenswrapper[4718]: I0123 17:23:56.955106 4718 trace.go:236] Trace[1974660182]: "Calculate volume metrics of storage for pod minio-dev/minio" (23-Jan-2026 
17:23:55.948) (total time: 1005ms): Jan 23 17:23:56 crc kubenswrapper[4718]: Trace[1974660182]: [1.00599197s] [1.00599197s] END Jan 23 17:23:59 crc kubenswrapper[4718]: I0123 17:23:59.159265 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:23:59 crc kubenswrapper[4718]: E0123 17:23:59.160357 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:24:12 crc kubenswrapper[4718]: I0123 17:24:12.140392 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:24:12 crc kubenswrapper[4718]: E0123 17:24:12.141233 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:24:25 crc kubenswrapper[4718]: I0123 17:24:25.144318 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:24:25 crc kubenswrapper[4718]: E0123 17:24:25.146367 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:24:36 crc kubenswrapper[4718]: I0123 17:24:36.141772 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:24:36 crc kubenswrapper[4718]: E0123 17:24:36.143027 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:24:51 crc kubenswrapper[4718]: I0123 17:24:51.140089 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:24:51 crc kubenswrapper[4718]: E0123 17:24:51.140904 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:25:05 crc kubenswrapper[4718]: I0123 17:25:05.144768 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:25:05 crc kubenswrapper[4718]: E0123 17:25:05.148014 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:25:19 crc kubenswrapper[4718]: I0123 17:25:19.140410 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:25:19 crc kubenswrapper[4718]: E0123 17:25:19.141371 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:25:32 crc kubenswrapper[4718]: I0123 17:25:32.140519 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:25:32 crc kubenswrapper[4718]: E0123 17:25:32.141307 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.870745 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"] Jan 23 17:25:38 crc kubenswrapper[4718]: E0123 17:25:38.872059 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="extract-utilities" Jan 23 17:25:38 crc 
kubenswrapper[4718]: I0123 17:25:38.872077 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="extract-utilities" Jan 23 17:25:38 crc kubenswrapper[4718]: E0123 17:25:38.872132 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="registry-server" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.872141 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="registry-server" Jan 23 17:25:38 crc kubenswrapper[4718]: E0123 17:25:38.872177 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="extract-content" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.872187 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="extract-content" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.872509 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="188ebe99-20ae-455a-babe-62f0df57972d" containerName="registry-server" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.874892 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.888939 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"] Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.980177 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.980625 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:38 crc kubenswrapper[4718]: I0123 17:25:38.980696 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dglnp\" (UniqueName: \"kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.083935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.083996 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.084043 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dglnp\" (UniqueName: \"kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.084567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.084582 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.111839 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dglnp\" (UniqueName: \"kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp\") pod \"redhat-marketplace-nd5nw\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.203883 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:39 crc kubenswrapper[4718]: I0123 17:25:39.736499 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"] Jan 23 17:25:40 crc kubenswrapper[4718]: I0123 17:25:40.453351 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d672bf9-9570-4c3c-84b9-080326001619" containerID="8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead" exitCode=0 Jan 23 17:25:40 crc kubenswrapper[4718]: I0123 17:25:40.453695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerDied","Data":"8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead"} Jan 23 17:25:40 crc kubenswrapper[4718]: I0123 17:25:40.453724 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerStarted","Data":"c99c1fd06841eb3aa4dc3f22a26c13eee4557d8721aff394a0dbf9ba9d198238"} Jan 23 17:25:41 crc kubenswrapper[4718]: I0123 17:25:41.465151 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerStarted","Data":"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"} Jan 23 17:25:42 crc kubenswrapper[4718]: I0123 17:25:42.479847 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d672bf9-9570-4c3c-84b9-080326001619" containerID="b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f" exitCode=0 Jan 23 17:25:42 crc kubenswrapper[4718]: I0123 17:25:42.479936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" 
event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerDied","Data":"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"} Jan 23 17:25:43 crc kubenswrapper[4718]: I0123 17:25:43.492708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerStarted","Data":"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615"} Jan 23 17:25:43 crc kubenswrapper[4718]: I0123 17:25:43.532222 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nd5nw" podStartSLOduration=2.98981401 podStartE2EDuration="5.532196463s" podCreationTimestamp="2026-01-23 17:25:38 +0000 UTC" firstStartedPulling="2026-01-23 17:25:40.456493354 +0000 UTC m=+4141.603735345" lastFinishedPulling="2026-01-23 17:25:42.998875787 +0000 UTC m=+4144.146117798" observedRunningTime="2026-01-23 17:25:43.518738746 +0000 UTC m=+4144.665980747" watchObservedRunningTime="2026-01-23 17:25:43.532196463 +0000 UTC m=+4144.679438464" Jan 23 17:25:47 crc kubenswrapper[4718]: I0123 17:25:47.141269 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:25:47 crc kubenswrapper[4718]: E0123 17:25:47.143330 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:25:49 crc kubenswrapper[4718]: I0123 17:25:49.204722 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:49 crc 
kubenswrapper[4718]: I0123 17:25:49.205072 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:49 crc kubenswrapper[4718]: I0123 17:25:49.258688 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:49 crc kubenswrapper[4718]: I0123 17:25:49.636194 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:49 crc kubenswrapper[4718]: I0123 17:25:49.699602 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"] Jan 23 17:25:51 crc kubenswrapper[4718]: I0123 17:25:51.588004 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nd5nw" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="registry-server" containerID="cri-o://69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615" gracePeriod=2 Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.126049 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd5nw" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.225437 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dglnp\" (UniqueName: \"kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp\") pod \"6d672bf9-9570-4c3c-84b9-080326001619\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.225968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities\") pod \"6d672bf9-9570-4c3c-84b9-080326001619\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.226205 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content\") pod \"6d672bf9-9570-4c3c-84b9-080326001619\" (UID: \"6d672bf9-9570-4c3c-84b9-080326001619\") " Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.226916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities" (OuterVolumeSpecName: "utilities") pod "6d672bf9-9570-4c3c-84b9-080326001619" (UID: "6d672bf9-9570-4c3c-84b9-080326001619"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.227776 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.232001 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp" (OuterVolumeSpecName: "kube-api-access-dglnp") pod "6d672bf9-9570-4c3c-84b9-080326001619" (UID: "6d672bf9-9570-4c3c-84b9-080326001619"). InnerVolumeSpecName "kube-api-access-dglnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.253870 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d672bf9-9570-4c3c-84b9-080326001619" (UID: "6d672bf9-9570-4c3c-84b9-080326001619"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.330843 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d672bf9-9570-4c3c-84b9-080326001619-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.330880 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dglnp\" (UniqueName: \"kubernetes.io/projected/6d672bf9-9570-4c3c-84b9-080326001619-kube-api-access-dglnp\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.604297 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d672bf9-9570-4c3c-84b9-080326001619" containerID="69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615" exitCode=0 Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.604351 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerDied","Data":"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615"} Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.604387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd5nw" event={"ID":"6d672bf9-9570-4c3c-84b9-080326001619","Type":"ContainerDied","Data":"c99c1fd06841eb3aa4dc3f22a26c13eee4557d8721aff394a0dbf9ba9d198238"} Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.604408 4718 scope.go:117] "RemoveContainer" containerID="69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615" Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.604403 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd5nw"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.639384 4718 scope.go:117] "RemoveContainer" containerID="b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.647168 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"]
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.662859 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd5nw"]
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.683305 4718 scope.go:117] "RemoveContainer" containerID="8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.723388 4718 scope.go:117] "RemoveContainer" containerID="69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615"
Jan 23 17:25:52 crc kubenswrapper[4718]: E0123 17:25:52.724039 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615\": container with ID starting with 69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615 not found: ID does not exist" containerID="69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.724088 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615"} err="failed to get container status \"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615\": rpc error: code = NotFound desc = could not find container \"69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615\": container with ID starting with 69538d4a864443f5770bb5a562300b751b716252cf92ff6b3fc1f1f1bd930615 not found: ID does not exist"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.724116 4718 scope.go:117] "RemoveContainer" containerID="b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"
Jan 23 17:25:52 crc kubenswrapper[4718]: E0123 17:25:52.724552 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f\": container with ID starting with b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f not found: ID does not exist" containerID="b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.724575 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f"} err="failed to get container status \"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f\": rpc error: code = NotFound desc = could not find container \"b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f\": container with ID starting with b1a9fbba94b6c95b066631af68258d9c6f760921348c73fea5292241a53c6e5f not found: ID does not exist"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.724591 4718 scope.go:117] "RemoveContainer" containerID="8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead"
Jan 23 17:25:52 crc kubenswrapper[4718]: E0123 17:25:52.724950 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead\": container with ID starting with 8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead not found: ID does not exist" containerID="8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead"
Jan 23 17:25:52 crc kubenswrapper[4718]: I0123 17:25:52.724993 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead"} err="failed to get container status \"8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead\": rpc error: code = NotFound desc = could not find container \"8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead\": container with ID starting with 8d53d3b5dc2f4721edbf5bfbcc491216068317f833cc1682203eb453dc761ead not found: ID does not exist"
Jan 23 17:25:53 crc kubenswrapper[4718]: I0123 17:25:53.159831 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d672bf9-9570-4c3c-84b9-080326001619" path="/var/lib/kubelet/pods/6d672bf9-9570-4c3c-84b9-080326001619/volumes"
Jan 23 17:25:58 crc kubenswrapper[4718]: I0123 17:25:58.140490 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc"
Jan 23 17:25:58 crc kubenswrapper[4718]: E0123 17:25:58.141576 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:26:10 crc kubenswrapper[4718]: I0123 17:26:10.140010 4718 scope.go:117] "RemoveContainer" containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc"
Jan 23 17:26:10 crc kubenswrapper[4718]: I0123 17:26:10.818356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574"}
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.083302 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:05 crc kubenswrapper[4718]: E0123 17:28:05.084252 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="extract-content"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.084264 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="extract-content"
Jan 23 17:28:05 crc kubenswrapper[4718]: E0123 17:28:05.084276 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="registry-server"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.084281 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="registry-server"
Jan 23 17:28:05 crc kubenswrapper[4718]: E0123 17:28:05.084320 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="extract-utilities"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.084327 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="extract-utilities"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.084553 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d672bf9-9570-4c3c-84b9-080326001619" containerName="registry-server"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.086293 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.097123 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.152952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxx72\" (UniqueName: \"kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.153003 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.153064 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.256189 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxx72\" (UniqueName: \"kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.256481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.256622 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.257050 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.257063 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.279452 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxx72\" (UniqueName: \"kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72\") pod \"certified-operators-xgtkb\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") " pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.420322 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:05 crc kubenswrapper[4718]: I0123 17:28:05.975651 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:06 crc kubenswrapper[4718]: I0123 17:28:06.131163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerStarted","Data":"82603793eb83f380eab78c8bf41ec2040d2c3d66b8f26995369c70f2453e177f"}
Jan 23 17:28:07 crc kubenswrapper[4718]: I0123 17:28:07.150557 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerID="4bf307990fa5fa22b0de3a35dee9ef276e47581ba026fe3a68e8f8953ff7c677" exitCode=0
Jan 23 17:28:07 crc kubenswrapper[4718]: I0123 17:28:07.161674 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerDied","Data":"4bf307990fa5fa22b0de3a35dee9ef276e47581ba026fe3a68e8f8953ff7c677"}
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.175510 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerID="e4d839be8ae6d92b7e98fcf36a4c12bef680d3fee2393a299e473982fbcacdd1" exitCode=0
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.175645 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerDied","Data":"e4d839be8ae6d92b7e98fcf36a4c12bef680d3fee2393a299e473982fbcacdd1"}
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.294509 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.298870 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.319146 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.471385 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.472058 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.472145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7wl\" (UniqueName: \"kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.575229 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.575358 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.575401 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7wl\" (UniqueName: \"kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.576268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.576534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.623431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7wl\" (UniqueName: \"kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl\") pod \"redhat-operators-97b6n\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") " pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:09 crc kubenswrapper[4718]: I0123 17:28:09.635368 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:10 crc kubenswrapper[4718]: I0123 17:28:10.176088 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:10 crc kubenswrapper[4718]: W0123 17:28:10.184714 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf450d126_98df_4846_98a1_60ad879d2db1.slice/crio-234cff1b1c198a5fcec05759e1cbc4a968b3152f7cedb41c3ef3dd8d535eb0f5 WatchSource:0}: Error finding container 234cff1b1c198a5fcec05759e1cbc4a968b3152f7cedb41c3ef3dd8d535eb0f5: Status 404 returned error can't find the container with id 234cff1b1c198a5fcec05759e1cbc4a968b3152f7cedb41c3ef3dd8d535eb0f5
Jan 23 17:28:10 crc kubenswrapper[4718]: I0123 17:28:10.191373 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerStarted","Data":"e77b33726f7de557b67ff0fe8143514bfcc2d4cb70199b0a07ea7326704f2aa6"}
Jan 23 17:28:11 crc kubenswrapper[4718]: I0123 17:28:11.201198 4718 generic.go:334] "Generic (PLEG): container finished" podID="f450d126-98df-4846-98a1-60ad879d2db1" containerID="4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643" exitCode=0
Jan 23 17:28:11 crc kubenswrapper[4718]: I0123 17:28:11.201253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerDied","Data":"4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643"}
Jan 23 17:28:11 crc kubenswrapper[4718]: I0123 17:28:11.201515 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerStarted","Data":"234cff1b1c198a5fcec05759e1cbc4a968b3152f7cedb41c3ef3dd8d535eb0f5"}
Jan 23 17:28:11 crc kubenswrapper[4718]: I0123 17:28:11.229161 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgtkb" podStartSLOduration=3.714320087 podStartE2EDuration="6.229144561s" podCreationTimestamp="2026-01-23 17:28:05 +0000 UTC" firstStartedPulling="2026-01-23 17:28:07.153395725 +0000 UTC m=+4288.300637716" lastFinishedPulling="2026-01-23 17:28:09.668220189 +0000 UTC m=+4290.815462190" observedRunningTime="2026-01-23 17:28:10.216570362 +0000 UTC m=+4291.363812353" watchObservedRunningTime="2026-01-23 17:28:11.229144561 +0000 UTC m=+4292.376386552"
Jan 23 17:28:13 crc kubenswrapper[4718]: I0123 17:28:13.228818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerStarted","Data":"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd"}
Jan 23 17:28:15 crc kubenswrapper[4718]: I0123 17:28:15.254365 4718 generic.go:334] "Generic (PLEG): container finished" podID="f450d126-98df-4846-98a1-60ad879d2db1" containerID="e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd" exitCode=0
Jan 23 17:28:15 crc kubenswrapper[4718]: I0123 17:28:15.254421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerDied","Data":"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd"}
Jan 23 17:28:15 crc kubenswrapper[4718]: I0123 17:28:15.421190 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:15 crc kubenswrapper[4718]: I0123 17:28:15.421286 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:15 crc kubenswrapper[4718]: I0123 17:28:15.908681 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:16 crc kubenswrapper[4718]: I0123 17:28:16.272162 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerStarted","Data":"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"}
Jan 23 17:28:16 crc kubenswrapper[4718]: I0123 17:28:16.302766 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97b6n" podStartSLOduration=2.73943411 podStartE2EDuration="7.302744347s" podCreationTimestamp="2026-01-23 17:28:09 +0000 UTC" firstStartedPulling="2026-01-23 17:28:11.204714147 +0000 UTC m=+4292.351956138" lastFinishedPulling="2026-01-23 17:28:15.768024384 +0000 UTC m=+4296.915266375" observedRunningTime="2026-01-23 17:28:16.292304192 +0000 UTC m=+4297.439546183" watchObservedRunningTime="2026-01-23 17:28:16.302744347 +0000 UTC m=+4297.449986338"
Jan 23 17:28:16 crc kubenswrapper[4718]: I0123 17:28:16.334024 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:17 crc kubenswrapper[4718]: I0123 17:28:17.876473 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:18 crc kubenswrapper[4718]: I0123 17:28:18.289724 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgtkb" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="registry-server" containerID="cri-o://e77b33726f7de557b67ff0fe8143514bfcc2d4cb70199b0a07ea7326704f2aa6" gracePeriod=2
Jan 23 17:28:19 crc kubenswrapper[4718]: I0123 17:28:19.306287 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerID="e77b33726f7de557b67ff0fe8143514bfcc2d4cb70199b0a07ea7326704f2aa6" exitCode=0
Jan 23 17:28:19 crc kubenswrapper[4718]: I0123 17:28:19.306384 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerDied","Data":"e77b33726f7de557b67ff0fe8143514bfcc2d4cb70199b0a07ea7326704f2aa6"}
Jan 23 17:28:19 crc kubenswrapper[4718]: I0123 17:28:19.673262 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:19 crc kubenswrapper[4718]: I0123 17:28:19.673416 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.078076 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.193810 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxx72\" (UniqueName: \"kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72\") pod \"d1a58bac-32c3-49cc-b942-f1c035705bd8\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") "
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.193940 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities\") pod \"d1a58bac-32c3-49cc-b942-f1c035705bd8\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") "
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.194626 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content\") pod \"d1a58bac-32c3-49cc-b942-f1c035705bd8\" (UID: \"d1a58bac-32c3-49cc-b942-f1c035705bd8\") "
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.195174 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities" (OuterVolumeSpecName: "utilities") pod "d1a58bac-32c3-49cc-b942-f1c035705bd8" (UID: "d1a58bac-32c3-49cc-b942-f1c035705bd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.195625 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.201497 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72" (OuterVolumeSpecName: "kube-api-access-lxx72") pod "d1a58bac-32c3-49cc-b942-f1c035705bd8" (UID: "d1a58bac-32c3-49cc-b942-f1c035705bd8"). InnerVolumeSpecName "kube-api-access-lxx72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.238392 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1a58bac-32c3-49cc-b942-f1c035705bd8" (UID: "d1a58bac-32c3-49cc-b942-f1c035705bd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.298159 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a58bac-32c3-49cc-b942-f1c035705bd8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.298216 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxx72\" (UniqueName: \"kubernetes.io/projected/d1a58bac-32c3-49cc-b942-f1c035705bd8-kube-api-access-lxx72\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.327863 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtkb" event={"ID":"d1a58bac-32c3-49cc-b942-f1c035705bd8","Type":"ContainerDied","Data":"82603793eb83f380eab78c8bf41ec2040d2c3d66b8f26995369c70f2453e177f"}
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.327937 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtkb"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.327979 4718 scope.go:117] "RemoveContainer" containerID="e77b33726f7de557b67ff0fe8143514bfcc2d4cb70199b0a07ea7326704f2aa6"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.351608 4718 scope.go:117] "RemoveContainer" containerID="e4d839be8ae6d92b7e98fcf36a4c12bef680d3fee2393a299e473982fbcacdd1"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.383938 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.400047 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgtkb"]
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.412878 4718 scope.go:117] "RemoveContainer" containerID="4bf307990fa5fa22b0de3a35dee9ef276e47581ba026fe3a68e8f8953ff7c677"
Jan 23 17:28:20 crc kubenswrapper[4718]: I0123 17:28:20.726806 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97b6n" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="registry-server" probeResult="failure" output=<
Jan 23 17:28:20 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Jan 23 17:28:20 crc kubenswrapper[4718]: >
Jan 23 17:28:21 crc kubenswrapper[4718]: I0123 17:28:21.153899 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" path="/var/lib/kubelet/pods/d1a58bac-32c3-49cc-b942-f1c035705bd8/volumes"
Jan 23 17:28:28 crc kubenswrapper[4718]: I0123 17:28:28.875525 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:28:28 crc kubenswrapper[4718]: I0123 17:28:28.876141 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 17:28:29 crc kubenswrapper[4718]: I0123 17:28:29.720537 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:29 crc kubenswrapper[4718]: I0123 17:28:29.782912 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:30 crc kubenswrapper[4718]: I0123 17:28:30.686401 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:31 crc kubenswrapper[4718]: I0123 17:28:31.451071 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97b6n" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="registry-server" containerID="cri-o://6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae" gracePeriod=2
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.070602 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.219383 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf7wl\" (UniqueName: \"kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl\") pod \"f450d126-98df-4846-98a1-60ad879d2db1\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") "
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.219458 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content\") pod \"f450d126-98df-4846-98a1-60ad879d2db1\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") "
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.219544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities\") pod \"f450d126-98df-4846-98a1-60ad879d2db1\" (UID: \"f450d126-98df-4846-98a1-60ad879d2db1\") "
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.220454 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities" (OuterVolumeSpecName: "utilities") pod "f450d126-98df-4846-98a1-60ad879d2db1" (UID: "f450d126-98df-4846-98a1-60ad879d2db1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.221794 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.227488 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl" (OuterVolumeSpecName: "kube-api-access-vf7wl") pod "f450d126-98df-4846-98a1-60ad879d2db1" (UID: "f450d126-98df-4846-98a1-60ad879d2db1"). InnerVolumeSpecName "kube-api-access-vf7wl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.325425 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf7wl\" (UniqueName: \"kubernetes.io/projected/f450d126-98df-4846-98a1-60ad879d2db1-kube-api-access-vf7wl\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.345875 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f450d126-98df-4846-98a1-60ad879d2db1" (UID: "f450d126-98df-4846-98a1-60ad879d2db1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.429013 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f450d126-98df-4846-98a1-60ad879d2db1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.477876 4718 generic.go:334] "Generic (PLEG): container finished" podID="f450d126-98df-4846-98a1-60ad879d2db1" containerID="6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae" exitCode=0
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.477926 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerDied","Data":"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"}
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.477960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97b6n" event={"ID":"f450d126-98df-4846-98a1-60ad879d2db1","Type":"ContainerDied","Data":"234cff1b1c198a5fcec05759e1cbc4a968b3152f7cedb41c3ef3dd8d535eb0f5"}
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.477981 4718 scope.go:117] "RemoveContainer" containerID="6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.478000 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97b6n"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.523681 4718 scope.go:117] "RemoveContainer" containerID="e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.534793 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.546114 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97b6n"]
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.553475 4718 scope.go:117] "RemoveContainer" containerID="4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.641954 4718 scope.go:117] "RemoveContainer" containerID="6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"
Jan 23 17:28:32 crc kubenswrapper[4718]: E0123 17:28:32.643521 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae\": container with ID starting with 6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae not found: ID does not exist" containerID="6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"
Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.643568 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae"} err="failed to get container status \"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae\": rpc error: code = NotFound desc = could not find container \"6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae\": container with ID starting with 6ee0aa3bb509a0670f343d3d3fb11226b127684357ef27b61ee19d57070091ae not found: ID does
not exist" Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.643597 4718 scope.go:117] "RemoveContainer" containerID="e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd" Jan 23 17:28:32 crc kubenswrapper[4718]: E0123 17:28:32.644366 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd\": container with ID starting with e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd not found: ID does not exist" containerID="e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd" Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.644502 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd"} err="failed to get container status \"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd\": rpc error: code = NotFound desc = could not find container \"e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd\": container with ID starting with e24b965031138578e20c3ac57480b84b4e6562bb026c3aa6fa19b6085a1e2abd not found: ID does not exist" Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.644611 4718 scope.go:117] "RemoveContainer" containerID="4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643" Jan 23 17:28:32 crc kubenswrapper[4718]: E0123 17:28:32.645487 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643\": container with ID starting with 4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643 not found: ID does not exist" containerID="4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643" Jan 23 17:28:32 crc kubenswrapper[4718]: I0123 17:28:32.645524 4718 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643"} err="failed to get container status \"4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643\": rpc error: code = NotFound desc = could not find container \"4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643\": container with ID starting with 4df1ba85513f8e381b6d905ffe195364f7e35f26d3e1f490293ee7bc6c8b5643 not found: ID does not exist" Jan 23 17:28:33 crc kubenswrapper[4718]: I0123 17:28:33.160497 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f450d126-98df-4846-98a1-60ad879d2db1" path="/var/lib/kubelet/pods/f450d126-98df-4846-98a1-60ad879d2db1/volumes" Jan 23 17:28:58 crc kubenswrapper[4718]: I0123 17:28:58.875873 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:28:58 crc kubenswrapper[4718]: I0123 17:28:58.876457 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:29:28 crc kubenswrapper[4718]: I0123 17:29:28.876161 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:29:28 crc kubenswrapper[4718]: I0123 17:29:28.877355 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:29:28 crc kubenswrapper[4718]: I0123 17:29:28.877428 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:29:28 crc kubenswrapper[4718]: I0123 17:29:28.878335 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:29:28 crc kubenswrapper[4718]: I0123 17:29:28.878395 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574" gracePeriod=600 Jan 23 17:29:29 crc kubenswrapper[4718]: I0123 17:29:29.164781 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574" exitCode=0 Jan 23 17:29:29 crc kubenswrapper[4718]: I0123 17:29:29.164860 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574"} Jan 23 17:29:29 crc kubenswrapper[4718]: I0123 17:29:29.165331 4718 scope.go:117] "RemoveContainer" 
containerID="9fb8bf80a1dbec56bff1c3d48fbb9a6874d0a6151a0b59b3937e34878c4c02bc" Jan 23 17:29:30 crc kubenswrapper[4718]: I0123 17:29:30.178738 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"} Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.173903 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9"] Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175009 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="extract-utilities" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175026 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="extract-utilities" Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175044 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="extract-content" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175049 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="extract-content" Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175072 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="extract-utilities" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175078 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="extract-utilities" Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175100 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" 
containerName="extract-content" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175105 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="extract-content" Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175115 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175122 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: E0123 17:30:00.175138 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175143 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175373 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f450d126-98df-4846-98a1-60ad879d2db1" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.175387 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a58bac-32c3-49cc-b942-f1c035705bd8" containerName="registry-server" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.178551 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.180407 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.182269 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.186718 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9"] Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.301029 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxj9\" (UniqueName: \"kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.301464 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.302568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.404391 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.404502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxj9\" (UniqueName: \"kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.404646 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.405670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.412615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.427570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxj9\" (UniqueName: \"kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9\") pod \"collect-profiles-29486490-bzsv9\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:00 crc kubenswrapper[4718]: I0123 17:30:00.498777 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:01 crc kubenswrapper[4718]: I0123 17:30:00.998203 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9"] Jan 23 17:30:01 crc kubenswrapper[4718]: I0123 17:30:01.528604 4718 generic.go:334] "Generic (PLEG): container finished" podID="af2c9b22-a18d-445d-a59c-1b4daf0f0977" containerID="bceeb28e6af31116828c0d7d9225ffb209d11f1a61f93ffca4fb197681e0f94b" exitCode=0 Jan 23 17:30:01 crc kubenswrapper[4718]: I0123 17:30:01.528674 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" event={"ID":"af2c9b22-a18d-445d-a59c-1b4daf0f0977","Type":"ContainerDied","Data":"bceeb28e6af31116828c0d7d9225ffb209d11f1a61f93ffca4fb197681e0f94b"} Jan 23 17:30:01 crc kubenswrapper[4718]: I0123 17:30:01.528699 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" 
event={"ID":"af2c9b22-a18d-445d-a59c-1b4daf0f0977","Type":"ContainerStarted","Data":"f3bd1ad65241b811488a0d63193897b3cd73cc19e207a68bd53f109b57ed3d53"} Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.052810 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.194972 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume\") pod \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.195229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxj9\" (UniqueName: \"kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9\") pod \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.195254 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume\") pod \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\" (UID: \"af2c9b22-a18d-445d-a59c-1b4daf0f0977\") " Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.196320 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume" (OuterVolumeSpecName: "config-volume") pod "af2c9b22-a18d-445d-a59c-1b4daf0f0977" (UID: "af2c9b22-a18d-445d-a59c-1b4daf0f0977"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.201766 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9" (OuterVolumeSpecName: "kube-api-access-xgxj9") pod "af2c9b22-a18d-445d-a59c-1b4daf0f0977" (UID: "af2c9b22-a18d-445d-a59c-1b4daf0f0977"). InnerVolumeSpecName "kube-api-access-xgxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.202948 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "af2c9b22-a18d-445d-a59c-1b4daf0f0977" (UID: "af2c9b22-a18d-445d-a59c-1b4daf0f0977"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.298275 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxj9\" (UniqueName: \"kubernetes.io/projected/af2c9b22-a18d-445d-a59c-1b4daf0f0977-kube-api-access-xgxj9\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.298301 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2c9b22-a18d-445d-a59c-1b4daf0f0977-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.298312 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2c9b22-a18d-445d-a59c-1b4daf0f0977-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.557842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" 
event={"ID":"af2c9b22-a18d-445d-a59c-1b4daf0f0977","Type":"ContainerDied","Data":"f3bd1ad65241b811488a0d63193897b3cd73cc19e207a68bd53f109b57ed3d53"} Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.558128 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3bd1ad65241b811488a0d63193897b3cd73cc19e207a68bd53f109b57ed3d53" Jan 23 17:30:03 crc kubenswrapper[4718]: I0123 17:30:03.558184 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9" Jan 23 17:30:04 crc kubenswrapper[4718]: I0123 17:30:04.149499 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk"] Jan 23 17:30:04 crc kubenswrapper[4718]: I0123 17:30:04.166071 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-qs2wk"] Jan 23 17:30:05 crc kubenswrapper[4718]: I0123 17:30:05.156030 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e4a006-cff5-4912-87d0-89623f70d934" path="/var/lib/kubelet/pods/a4e4a006-cff5-4912-87d0-89623f70d934/volumes" Jan 23 17:31:03 crc kubenswrapper[4718]: I0123 17:31:03.137624 4718 scope.go:117] "RemoveContainer" containerID="a86e36d6ef8cb83be478bda28568913dd980d79cce0f31440c1ff18e52bc8b4f" Jan 23 17:31:58 crc kubenswrapper[4718]: I0123 17:31:58.875378 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:31:58 crc kubenswrapper[4718]: I0123 17:31:58.877246 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:32:28 crc kubenswrapper[4718]: I0123 17:32:28.876723 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:32:28 crc kubenswrapper[4718]: I0123 17:32:28.877353 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:32:58 crc kubenswrapper[4718]: I0123 17:32:58.875402 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:32:58 crc kubenswrapper[4718]: I0123 17:32:58.876205 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:32:58 crc kubenswrapper[4718]: I0123 17:32:58.876264 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:32:58 crc kubenswrapper[4718]: I0123 17:32:58.877063 4718 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:32:58 crc kubenswrapper[4718]: I0123 17:32:58.877177 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" gracePeriod=600 Jan 23 17:32:59 crc kubenswrapper[4718]: E0123 17:32:59.003359 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:32:59 crc kubenswrapper[4718]: I0123 17:32:59.455333 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" exitCode=0 Jan 23 17:32:59 crc kubenswrapper[4718]: I0123 17:32:59.455395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"} Jan 23 17:32:59 crc kubenswrapper[4718]: I0123 17:32:59.456005 4718 scope.go:117] "RemoveContainer" containerID="254eee1580cf0052b6d751c078df48be914c1db972e3c4b13c83302f29b68574" Jan 23 17:32:59 crc 
kubenswrapper[4718]: I0123 17:32:59.456723 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:32:59 crc kubenswrapper[4718]: E0123 17:32:59.457052 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:33:10 crc kubenswrapper[4718]: I0123 17:33:10.141124 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:33:10 crc kubenswrapper[4718]: E0123 17:33:10.142195 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:33:23 crc kubenswrapper[4718]: I0123 17:33:23.140762 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:33:23 crc kubenswrapper[4718]: E0123 17:33:23.141609 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 
23 17:33:35 crc kubenswrapper[4718]: I0123 17:33:35.140108 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:33:35 crc kubenswrapper[4718]: E0123 17:33:35.141154 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.142403 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:40 crc kubenswrapper[4718]: E0123 17:33:40.143350 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2c9b22-a18d-445d-a59c-1b4daf0f0977" containerName="collect-profiles"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.143362 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2c9b22-a18d-445d-a59c-1b4daf0f0977" containerName="collect-profiles"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.143620 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2c9b22-a18d-445d-a59c-1b4daf0f0977" containerName="collect-profiles"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.145357 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.165832 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.241127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.241196 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtj6\" (UniqueName: \"kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.241487 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.344567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.344872 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtj6\" (UniqueName: \"kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.345020 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.345069 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.345464 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.366570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtj6\" (UniqueName: \"kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6\") pod \"community-operators-4c7w4\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") " pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:40 crc kubenswrapper[4718]: I0123 17:33:40.472173 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:41 crc kubenswrapper[4718]: I0123 17:33:41.173198 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:41 crc kubenswrapper[4718]: I0123 17:33:41.918828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerStarted","Data":"97a2ca765d3d56f9adffe397a864fe38a4173ed635474bda2b76e3f023e67a77"}
Jan 23 17:33:42 crc kubenswrapper[4718]: I0123 17:33:42.936667 4718 generic.go:334] "Generic (PLEG): container finished" podID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerID="c7fe1fa65894b8493cc2ad39afccb7435f6b1bab68ce56a7dafb5725fde68af6" exitCode=0
Jan 23 17:33:42 crc kubenswrapper[4718]: I0123 17:33:42.936850 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerDied","Data":"c7fe1fa65894b8493cc2ad39afccb7435f6b1bab68ce56a7dafb5725fde68af6"}
Jan 23 17:33:42 crc kubenswrapper[4718]: I0123 17:33:42.941035 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 17:33:44 crc kubenswrapper[4718]: I0123 17:33:44.978285 4718 generic.go:334] "Generic (PLEG): container finished" podID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerID="4a4452f26fd6d7a0a05947889994bf8be7d4bbd2f2d244c79d2e7e58c236d774" exitCode=0
Jan 23 17:33:44 crc kubenswrapper[4718]: I0123 17:33:44.978620 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerDied","Data":"4a4452f26fd6d7a0a05947889994bf8be7d4bbd2f2d244c79d2e7e58c236d774"}
Jan 23 17:33:45 crc kubenswrapper[4718]: I0123 17:33:45.997215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerStarted","Data":"0995edc4c5c78ca1f9172e84b46ce8475af551ef60b7dd0bc3e7aa61891b61fb"}
Jan 23 17:33:46 crc kubenswrapper[4718]: I0123 17:33:46.038245 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4c7w4" podStartSLOduration=3.5852876609999997 podStartE2EDuration="6.038212752s" podCreationTimestamp="2026-01-23 17:33:40 +0000 UTC" firstStartedPulling="2026-01-23 17:33:42.940799041 +0000 UTC m=+4624.088041032" lastFinishedPulling="2026-01-23 17:33:45.393724132 +0000 UTC m=+4626.540966123" observedRunningTime="2026-01-23 17:33:46.024095258 +0000 UTC m=+4627.171337249" watchObservedRunningTime="2026-01-23 17:33:46.038212752 +0000 UTC m=+4627.185454743"
Jan 23 17:33:47 crc kubenswrapper[4718]: I0123 17:33:47.140815 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:33:47 crc kubenswrapper[4718]: E0123 17:33:47.141496 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:33:50 crc kubenswrapper[4718]: I0123 17:33:50.472489 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:50 crc kubenswrapper[4718]: I0123 17:33:50.473051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:50 crc kubenswrapper[4718]: I0123 17:33:50.544794 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:51 crc kubenswrapper[4718]: I0123 17:33:51.684458 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:51 crc kubenswrapper[4718]: I0123 17:33:51.745708 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:53 crc kubenswrapper[4718]: I0123 17:33:53.072493 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4c7w4" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="registry-server" containerID="cri-o://0995edc4c5c78ca1f9172e84b46ce8475af551ef60b7dd0bc3e7aa61891b61fb" gracePeriod=2
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.097621 4718 generic.go:334] "Generic (PLEG): container finished" podID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerID="0995edc4c5c78ca1f9172e84b46ce8475af551ef60b7dd0bc3e7aa61891b61fb" exitCode=0
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.097665 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerDied","Data":"0995edc4c5c78ca1f9172e84b46ce8475af551ef60b7dd0bc3e7aa61891b61fb"}
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.369184 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.521734 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvtj6\" (UniqueName: \"kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6\") pod \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") "
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.522257 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content\") pod \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") "
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.522285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities\") pod \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\" (UID: \"e783be3d-9f08-47e6-be9d-6ae7086aa1ac\") "
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.523712 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities" (OuterVolumeSpecName: "utilities") pod "e783be3d-9f08-47e6-be9d-6ae7086aa1ac" (UID: "e783be3d-9f08-47e6-be9d-6ae7086aa1ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.528828 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6" (OuterVolumeSpecName: "kube-api-access-vvtj6") pod "e783be3d-9f08-47e6-be9d-6ae7086aa1ac" (UID: "e783be3d-9f08-47e6-be9d-6ae7086aa1ac"). InnerVolumeSpecName "kube-api-access-vvtj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.593193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e783be3d-9f08-47e6-be9d-6ae7086aa1ac" (UID: "e783be3d-9f08-47e6-be9d-6ae7086aa1ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.624819 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.624868 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 17:33:54 crc kubenswrapper[4718]: I0123 17:33:54.624881 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvtj6\" (UniqueName: \"kubernetes.io/projected/e783be3d-9f08-47e6-be9d-6ae7086aa1ac-kube-api-access-vvtj6\") on node \"crc\" DevicePath \"\""
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.111217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c7w4" event={"ID":"e783be3d-9f08-47e6-be9d-6ae7086aa1ac","Type":"ContainerDied","Data":"97a2ca765d3d56f9adffe397a864fe38a4173ed635474bda2b76e3f023e67a77"}
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.111298 4718 scope.go:117] "RemoveContainer" containerID="0995edc4c5c78ca1f9172e84b46ce8475af551ef60b7dd0bc3e7aa61891b61fb"
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.111319 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c7w4"
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.147894 4718 scope.go:117] "RemoveContainer" containerID="4a4452f26fd6d7a0a05947889994bf8be7d4bbd2f2d244c79d2e7e58c236d774"
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.157657 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.161236 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4c7w4"]
Jan 23 17:33:55 crc kubenswrapper[4718]: I0123 17:33:55.175260 4718 scope.go:117] "RemoveContainer" containerID="c7fe1fa65894b8493cc2ad39afccb7435f6b1bab68ce56a7dafb5725fde68af6"
Jan 23 17:33:57 crc kubenswrapper[4718]: I0123 17:33:57.153868 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" path="/var/lib/kubelet/pods/e783be3d-9f08-47e6-be9d-6ae7086aa1ac/volumes"
Jan 23 17:33:58 crc kubenswrapper[4718]: I0123 17:33:58.141015 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:33:58 crc kubenswrapper[4718]: E0123 17:33:58.141717 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:34:11 crc kubenswrapper[4718]: I0123 17:34:11.140453 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:34:11 crc kubenswrapper[4718]: E0123 17:34:11.141608 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:34:26 crc kubenswrapper[4718]: I0123 17:34:26.141578 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:34:26 crc kubenswrapper[4718]: E0123 17:34:26.142811 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:34:39 crc kubenswrapper[4718]: I0123 17:34:39.149735 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:34:39 crc kubenswrapper[4718]: E0123 17:34:39.150692 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:34:50 crc kubenswrapper[4718]: I0123 17:34:50.140792 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:34:50 crc kubenswrapper[4718]: E0123 17:34:50.141645 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:35:02 crc kubenswrapper[4718]: I0123 17:35:02.140834 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:35:02 crc kubenswrapper[4718]: E0123 17:35:02.141962 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:35:15 crc kubenswrapper[4718]: I0123 17:35:15.140830 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:35:15 crc kubenswrapper[4718]: E0123 17:35:15.141927 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.943961 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 17:35:24 crc kubenswrapper[4718]: E0123 17:35:24.944885 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="extract-utilities"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.944899 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="extract-utilities"
Jan 23 17:35:24 crc kubenswrapper[4718]: E0123 17:35:24.944936 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="extract-content"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.944942 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="extract-content"
Jan 23 17:35:24 crc kubenswrapper[4718]: E0123 17:35:24.944970 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="registry-server"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.944976 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="registry-server"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.945199 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e783be3d-9f08-47e6-be9d-6ae7086aa1ac" containerName="registry-server"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.946287 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.948097 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.948208 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9zl9c"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.949075 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.953475 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 23 17:35:24 crc kubenswrapper[4718]: I0123 17:35:24.976621 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050625 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050667 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bdr4\" (UniqueName: \"kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050796 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050848 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.050898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bdr4\" (UniqueName: \"kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153881 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153965 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.153983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.154034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.154060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.154235 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.154493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.154929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.155175 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.155316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.160828 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.162398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.163567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.168827 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bdr4\" (UniqueName: \"kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.190639 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.270185 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 17:35:25 crc kubenswrapper[4718]: I0123 17:35:25.761619 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 17:35:26 crc kubenswrapper[4718]: I0123 17:35:26.071641 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"21700319-2dc7-41c4-8377-8ba6ef629cbb","Type":"ContainerStarted","Data":"ee56bf0d0b5cc049c974efb0df2ded82bcbb26e29f2f7908ac706bc979bdeeb9"}
Jan 23 17:35:30 crc kubenswrapper[4718]: I0123 17:35:30.141053 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:35:30 crc kubenswrapper[4718]: E0123 17:35:30.142461 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:35:44 crc kubenswrapper[4718]: I0123 17:35:44.140783 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:35:44 crc kubenswrapper[4718]: E0123 17:35:44.142159 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:35:57 crc kubenswrapper[4718]: I0123 17:35:57.140263 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84"
Jan 23 17:35:57 crc kubenswrapper[4718]: E0123 17:35:57.141033 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:36:02 crc kubenswrapper[4718]: E0123 17:36:02.277011 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 23 17:36:02 crc kubenswrapper[4718]: E0123 17:36:02.278108 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPa
th:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bdr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(21700319-2dc7-41c4-8377-8ba6ef629cbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:36:02 crc kubenswrapper[4718]: E0123 17:36:02.279543 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="21700319-2dc7-41c4-8377-8ba6ef629cbb" Jan 23 17:36:02 crc kubenswrapper[4718]: E0123 17:36:02.550659 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="21700319-2dc7-41c4-8377-8ba6ef629cbb" Jan 23 17:36:11 crc kubenswrapper[4718]: I0123 17:36:11.141203 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:36:11 crc kubenswrapper[4718]: E0123 17:36:11.142475 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:36:18 crc kubenswrapper[4718]: I0123 17:36:18.636221 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 17:36:20 crc kubenswrapper[4718]: I0123 17:36:20.749478 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"21700319-2dc7-41c4-8377-8ba6ef629cbb","Type":"ContainerStarted","Data":"f946573a95663ea3b2443e9e5507a3bce8a80352348f25ee91ccec54733d20be"} Jan 23 17:36:20 crc kubenswrapper[4718]: I0123 17:36:20.769797 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.8957683020000005 podStartE2EDuration="57.769780625s" podCreationTimestamp="2026-01-23 17:35:23 +0000 UTC" firstStartedPulling="2026-01-23 17:35:25.759312929 +0000 UTC m=+4726.906554920" lastFinishedPulling="2026-01-23 17:36:18.633325242 +0000 UTC m=+4779.780567243" observedRunningTime="2026-01-23 17:36:20.767580655 +0000 UTC m=+4781.914822716" watchObservedRunningTime="2026-01-23 17:36:20.769780625 +0000 UTC m=+4781.917022616" Jan 23 17:36:25 crc kubenswrapper[4718]: I0123 17:36:25.140836 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:36:25 crc kubenswrapper[4718]: E0123 17:36:25.141683 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:36:36 crc kubenswrapper[4718]: I0123 17:36:36.141057 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:36:36 crc kubenswrapper[4718]: E0123 17:36:36.142091 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:36:47 crc kubenswrapper[4718]: I0123 17:36:47.141038 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:36:47 crc kubenswrapper[4718]: E0123 17:36:47.141801 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.475939 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.479009 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.525450 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.544278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.544619 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkfx\" (UniqueName: \"kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.544842 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.647097 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.647236 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-szkfx\" (UniqueName: \"kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.647292 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.647543 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.647979 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.674415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkfx\" (UniqueName: \"kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx\") pod \"redhat-marketplace-mtn5m\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:52 crc kubenswrapper[4718]: I0123 17:36:52.809601 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:36:53 crc kubenswrapper[4718]: I0123 17:36:53.628253 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:36:54 crc kubenswrapper[4718]: I0123 17:36:54.098407 4718 generic.go:334] "Generic (PLEG): container finished" podID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerID="c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9" exitCode=0 Jan 23 17:36:54 crc kubenswrapper[4718]: I0123 17:36:54.098618 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerDied","Data":"c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9"} Jan 23 17:36:54 crc kubenswrapper[4718]: I0123 17:36:54.098823 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerStarted","Data":"e0a9e60bc07933fac3e40015a4287adf91de79173be629271e5918ba47df590a"} Jan 23 17:36:56 crc kubenswrapper[4718]: I0123 17:36:56.132671 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerStarted","Data":"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8"} Jan 23 17:36:56 crc kubenswrapper[4718]: E0123 17:36:56.389697 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5cc2be6_11f8_4ad9_9f69_12d5c7d999d4.slice/crio-dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:36:57 crc kubenswrapper[4718]: I0123 17:36:57.144746 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerID="dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8" exitCode=0 Jan 23 17:36:57 crc kubenswrapper[4718]: I0123 17:36:57.151797 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerDied","Data":"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8"} Jan 23 17:36:58 crc kubenswrapper[4718]: I0123 17:36:58.158134 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerStarted","Data":"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae"} Jan 23 17:36:58 crc kubenswrapper[4718]: I0123 17:36:58.186912 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mtn5m" podStartSLOduration=2.463662642 podStartE2EDuration="6.186891325s" podCreationTimestamp="2026-01-23 17:36:52 +0000 UTC" firstStartedPulling="2026-01-23 17:36:54.100555837 +0000 UTC m=+4815.247797828" lastFinishedPulling="2026-01-23 17:36:57.82378452 +0000 UTC m=+4818.971026511" observedRunningTime="2026-01-23 17:36:58.175293879 +0000 UTC m=+4819.322535870" watchObservedRunningTime="2026-01-23 17:36:58.186891325 +0000 UTC m=+4819.334133316" Jan 23 17:37:00 crc kubenswrapper[4718]: I0123 17:37:00.142084 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:37:00 crc kubenswrapper[4718]: E0123 17:37:00.143916 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:37:02 crc kubenswrapper[4718]: I0123 17:37:02.809838 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:02 crc kubenswrapper[4718]: I0123 17:37:02.810368 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:02 crc kubenswrapper[4718]: I0123 17:37:02.868190 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:03 crc kubenswrapper[4718]: I0123 17:37:03.293258 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:03 crc kubenswrapper[4718]: I0123 17:37:03.362186 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:37:05 crc kubenswrapper[4718]: I0123 17:37:05.238667 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mtn5m" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="registry-server" containerID="cri-o://ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae" gracePeriod=2 Jan 23 17:37:05 crc kubenswrapper[4718]: I0123 17:37:05.890794 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.006369 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szkfx\" (UniqueName: \"kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx\") pod \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.007050 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities\") pod \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.007134 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content\") pod \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\" (UID: \"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4\") " Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.008009 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities" (OuterVolumeSpecName: "utilities") pod "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" (UID: "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.026961 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx" (OuterVolumeSpecName: "kube-api-access-szkfx") pod "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" (UID: "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4"). InnerVolumeSpecName "kube-api-access-szkfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.028692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" (UID: "e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.112220 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szkfx\" (UniqueName: \"kubernetes.io/projected/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-kube-api-access-szkfx\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.112726 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.112827 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.256689 4718 generic.go:334] "Generic (PLEG): container finished" podID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerID="ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae" exitCode=0 Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.256743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerDied","Data":"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae"} Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.256777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-mtn5m" event={"ID":"e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4","Type":"ContainerDied","Data":"e0a9e60bc07933fac3e40015a4287adf91de79173be629271e5918ba47df590a"} Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.256801 4718 scope.go:117] "RemoveContainer" containerID="ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.257005 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtn5m" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.303162 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.306356 4718 scope.go:117] "RemoveContainer" containerID="dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.316493 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtn5m"] Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.328241 4718 scope.go:117] "RemoveContainer" containerID="c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.384455 4718 scope.go:117] "RemoveContainer" containerID="ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae" Jan 23 17:37:06 crc kubenswrapper[4718]: E0123 17:37:06.385514 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae\": container with ID starting with ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae not found: ID does not exist" containerID="ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.385560 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae"} err="failed to get container status \"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae\": rpc error: code = NotFound desc = could not find container \"ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae\": container with ID starting with ea30803bd6112c61a78e2055ea21550687bda8b51510e22783ef812d8b9157ae not found: ID does not exist" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.385589 4718 scope.go:117] "RemoveContainer" containerID="dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8" Jan 23 17:37:06 crc kubenswrapper[4718]: E0123 17:37:06.386304 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8\": container with ID starting with dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8 not found: ID does not exist" containerID="dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.386393 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8"} err="failed to get container status \"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8\": rpc error: code = NotFound desc = could not find container \"dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8\": container with ID starting with dd6dd7d6aa066963f967061b13081c3c930eb90a2816684496e1583cdeb742f8 not found: ID does not exist" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.386433 4718 scope.go:117] "RemoveContainer" containerID="c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9" Jan 23 17:37:06 crc kubenswrapper[4718]: E0123 
17:37:06.386971 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9\": container with ID starting with c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9 not found: ID does not exist" containerID="c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9" Jan 23 17:37:06 crc kubenswrapper[4718]: I0123 17:37:06.387012 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9"} err="failed to get container status \"c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9\": rpc error: code = NotFound desc = could not find container \"c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9\": container with ID starting with c84d5b2b465deaa07bb0832231efffd5f115c05724f1f74dc8fbb2b51bf148a9 not found: ID does not exist" Jan 23 17:37:07 crc kubenswrapper[4718]: I0123 17:37:07.153497 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" path="/var/lib/kubelet/pods/e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4/volumes" Jan 23 17:37:15 crc kubenswrapper[4718]: I0123 17:37:15.141012 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:37:15 crc kubenswrapper[4718]: E0123 17:37:15.142298 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:37:30 crc kubenswrapper[4718]: I0123 17:37:30.140643 
4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:37:30 crc kubenswrapper[4718]: E0123 17:37:30.142217 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:37:43 crc kubenswrapper[4718]: I0123 17:37:43.141987 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:37:43 crc kubenswrapper[4718]: E0123 17:37:43.143300 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:37:58 crc kubenswrapper[4718]: I0123 17:37:58.141063 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:37:58 crc kubenswrapper[4718]: E0123 17:37:58.141898 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:38:09 crc kubenswrapper[4718]: I0123 
17:38:09.148763 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:38:09 crc kubenswrapper[4718]: I0123 17:38:09.947557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a"} Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.708505 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:29 crc kubenswrapper[4718]: E0123 17:39:29.713036 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="extract-content" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.713064 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="extract-content" Jan 23 17:39:29 crc kubenswrapper[4718]: E0123 17:39:29.713106 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="extract-utilities" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.713112 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="extract-utilities" Jan 23 17:39:29 crc kubenswrapper[4718]: E0123 17:39:29.713132 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="registry-server" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.713137 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="registry-server" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.713712 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e5cc2be6-11f8-4ad9-9f69-12d5c7d999d4" containerName="registry-server" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.716620 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.754427 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.899520 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxshk\" (UniqueName: \"kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.899600 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:29 crc kubenswrapper[4718]: I0123 17:39:29.899709 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.001380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxshk\" (UniqueName: \"kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk\") pod \"certified-operators-jw95x\" 
(UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.001474 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.001596 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.003261 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.003523 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.028493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxshk\" (UniqueName: \"kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk\") pod \"certified-operators-jw95x\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " 
pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.043836 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:30 crc kubenswrapper[4718]: I0123 17:39:30.991433 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:31 crc kubenswrapper[4718]: I0123 17:39:31.885455 4718 generic.go:334] "Generic (PLEG): container finished" podID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerID="881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6" exitCode=0 Jan 23 17:39:31 crc kubenswrapper[4718]: I0123 17:39:31.885567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerDied","Data":"881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6"} Jan 23 17:39:31 crc kubenswrapper[4718]: I0123 17:39:31.885889 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerStarted","Data":"838bd85bb49dc47a7479977dbd95343501a47656ffd4e65c67f89edb0891fa03"} Jan 23 17:39:31 crc kubenswrapper[4718]: I0123 17:39:31.889235 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:39:33 crc kubenswrapper[4718]: I0123 17:39:33.913981 4718 generic.go:334] "Generic (PLEG): container finished" podID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerID="1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75" exitCode=0 Jan 23 17:39:33 crc kubenswrapper[4718]: I0123 17:39:33.914057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" 
event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerDied","Data":"1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75"} Jan 23 17:39:34 crc kubenswrapper[4718]: I0123 17:39:34.929994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerStarted","Data":"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea"} Jan 23 17:39:34 crc kubenswrapper[4718]: I0123 17:39:34.959565 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jw95x" podStartSLOduration=3.517996409 podStartE2EDuration="5.959105619s" podCreationTimestamp="2026-01-23 17:39:29 +0000 UTC" firstStartedPulling="2026-01-23 17:39:31.887653214 +0000 UTC m=+4973.034895205" lastFinishedPulling="2026-01-23 17:39:34.328762424 +0000 UTC m=+4975.476004415" observedRunningTime="2026-01-23 17:39:34.949271762 +0000 UTC m=+4976.096513763" watchObservedRunningTime="2026-01-23 17:39:34.959105619 +0000 UTC m=+4976.106347610" Jan 23 17:39:40 crc kubenswrapper[4718]: I0123 17:39:40.044734 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:40 crc kubenswrapper[4718]: I0123 17:39:40.045319 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:40 crc kubenswrapper[4718]: I0123 17:39:40.106555 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:41 crc kubenswrapper[4718]: I0123 17:39:41.052863 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:41 crc kubenswrapper[4718]: I0123 17:39:41.170138 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.015502 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jw95x" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="registry-server" containerID="cri-o://442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea" gracePeriod=2 Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.710742 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.742536 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities\") pod \"02543a99-0395-47e3-aeac-a42d7ddd9348\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.742618 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content\") pod \"02543a99-0395-47e3-aeac-a42d7ddd9348\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.742844 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxshk\" (UniqueName: \"kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk\") pod \"02543a99-0395-47e3-aeac-a42d7ddd9348\" (UID: \"02543a99-0395-47e3-aeac-a42d7ddd9348\") " Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.744326 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities" (OuterVolumeSpecName: "utilities") pod "02543a99-0395-47e3-aeac-a42d7ddd9348" (UID: 
"02543a99-0395-47e3-aeac-a42d7ddd9348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.766961 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk" (OuterVolumeSpecName: "kube-api-access-wxshk") pod "02543a99-0395-47e3-aeac-a42d7ddd9348" (UID: "02543a99-0395-47e3-aeac-a42d7ddd9348"). InnerVolumeSpecName "kube-api-access-wxshk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.845827 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxshk\" (UniqueName: \"kubernetes.io/projected/02543a99-0395-47e3-aeac-a42d7ddd9348-kube-api-access-wxshk\") on node \"crc\" DevicePath \"\"" Jan 23 17:39:43 crc kubenswrapper[4718]: I0123 17:39:43.845871 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.027579 4718 generic.go:334] "Generic (PLEG): container finished" podID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerID="442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea" exitCode=0 Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.027659 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jw95x" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.027654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerDied","Data":"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea"} Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.027722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jw95x" event={"ID":"02543a99-0395-47e3-aeac-a42d7ddd9348","Type":"ContainerDied","Data":"838bd85bb49dc47a7479977dbd95343501a47656ffd4e65c67f89edb0891fa03"} Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.027743 4718 scope.go:117] "RemoveContainer" containerID="442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.054868 4718 scope.go:117] "RemoveContainer" containerID="1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.086064 4718 scope.go:117] "RemoveContainer" containerID="881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.149955 4718 scope.go:117] "RemoveContainer" containerID="442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea" Jan 23 17:39:44 crc kubenswrapper[4718]: E0123 17:39:44.150457 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea\": container with ID starting with 442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea not found: ID does not exist" containerID="442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.150502 4718 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea"} err="failed to get container status \"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea\": rpc error: code = NotFound desc = could not find container \"442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea\": container with ID starting with 442eab51da60fd0f3940ceb37c96021c49b0f7553c66cb2f7b7e7d293ea66fea not found: ID does not exist" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.150544 4718 scope.go:117] "RemoveContainer" containerID="1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75" Jan 23 17:39:44 crc kubenswrapper[4718]: E0123 17:39:44.151043 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75\": container with ID starting with 1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75 not found: ID does not exist" containerID="1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.151067 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75"} err="failed to get container status \"1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75\": rpc error: code = NotFound desc = could not find container \"1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75\": container with ID starting with 1b86f24d94b3d613d665049fc8a762b418efb224036181c8fbc776bb07faab75 not found: ID does not exist" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.151082 4718 scope.go:117] "RemoveContainer" containerID="881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6" Jan 23 17:39:44 crc kubenswrapper[4718]: E0123 17:39:44.151339 4718 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6\": container with ID starting with 881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6 not found: ID does not exist" containerID="881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.151358 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6"} err="failed to get container status \"881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6\": rpc error: code = NotFound desc = could not find container \"881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6\": container with ID starting with 881a7d5387fd82b46393b6dd5bbcd10734cadd2f497a429064c755e52eba68f6 not found: ID does not exist" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.395728 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02543a99-0395-47e3-aeac-a42d7ddd9348" (UID: "02543a99-0395-47e3-aeac-a42d7ddd9348"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.458882 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02543a99-0395-47e3-aeac-a42d7ddd9348-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.685930 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:44 crc kubenswrapper[4718]: I0123 17:39:44.698078 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jw95x"] Jan 23 17:39:45 crc kubenswrapper[4718]: I0123 17:39:45.160874 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" path="/var/lib/kubelet/pods/02543a99-0395-47e3-aeac-a42d7ddd9348/volumes" Jan 23 17:40:28 crc kubenswrapper[4718]: I0123 17:40:28.875529 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:40:28 crc kubenswrapper[4718]: I0123 17:40:28.876497 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:40:58 crc kubenswrapper[4718]: I0123 17:40:58.876217 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 23 17:40:58 crc kubenswrapper[4718]: I0123 17:40:58.876798 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.761151 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:23 crc kubenswrapper[4718]: E0123 17:41:23.762290 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="extract-content" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.762304 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="extract-content" Jan 23 17:41:23 crc kubenswrapper[4718]: E0123 17:41:23.762326 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="registry-server" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.762334 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="registry-server" Jan 23 17:41:23 crc kubenswrapper[4718]: E0123 17:41:23.762353 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="extract-utilities" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.762359 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="extract-utilities" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.762666 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="02543a99-0395-47e3-aeac-a42d7ddd9348" containerName="registry-server" Jan 23 17:41:23 
crc kubenswrapper[4718]: I0123 17:41:23.764859 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.790245 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.866753 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.866807 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.866936 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxq5\" (UniqueName: \"kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.969555 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 
17:41:23.969614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.969675 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxq5\" (UniqueName: \"kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.970328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.970430 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:23 crc kubenswrapper[4718]: I0123 17:41:23.995367 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxq5\" (UniqueName: \"kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5\") pod \"redhat-operators-np8zg\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:24 crc kubenswrapper[4718]: I0123 17:41:24.087368 4718 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:24 crc kubenswrapper[4718]: W0123 17:41:24.546370 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36b21c3b_af9e_46bd_8e13_a86933a3ae7c.slice/crio-eb7063ab5797dc7cda60b1c7ff90201b41a259c416e11c9b5470ea0052af6a19 WatchSource:0}: Error finding container eb7063ab5797dc7cda60b1c7ff90201b41a259c416e11c9b5470ea0052af6a19: Status 404 returned error can't find the container with id eb7063ab5797dc7cda60b1c7ff90201b41a259c416e11c9b5470ea0052af6a19 Jan 23 17:41:24 crc kubenswrapper[4718]: I0123 17:41:24.547249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:24 crc kubenswrapper[4718]: I0123 17:41:24.797808 4718 generic.go:334] "Generic (PLEG): container finished" podID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerID="166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e" exitCode=0 Jan 23 17:41:24 crc kubenswrapper[4718]: I0123 17:41:24.797858 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerDied","Data":"166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e"} Jan 23 17:41:24 crc kubenswrapper[4718]: I0123 17:41:24.797892 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerStarted","Data":"eb7063ab5797dc7cda60b1c7ff90201b41a259c416e11c9b5470ea0052af6a19"} Jan 23 17:41:25 crc kubenswrapper[4718]: I0123 17:41:25.811217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" 
event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerStarted","Data":"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7"} Jan 23 17:41:28 crc kubenswrapper[4718]: I0123 17:41:28.876077 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:41:28 crc kubenswrapper[4718]: I0123 17:41:28.876618 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:41:28 crc kubenswrapper[4718]: I0123 17:41:28.876692 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:41:28 crc kubenswrapper[4718]: I0123 17:41:28.877848 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:41:28 crc kubenswrapper[4718]: I0123 17:41:28.877919 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a" gracePeriod=600 Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.852939 
4718 generic.go:334] "Generic (PLEG): container finished" podID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerID="64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7" exitCode=0 Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.853014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerDied","Data":"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7"} Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.867392 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a" exitCode=0 Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.867432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a"} Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.867458 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df"} Jan 23 17:41:29 crc kubenswrapper[4718]: I0123 17:41:29.867472 4718 scope.go:117] "RemoveContainer" containerID="5246212f5653bcab18a19f5faf60a65e8c153b774fb9d452d76da15da098bf84" Jan 23 17:41:30 crc kubenswrapper[4718]: I0123 17:41:30.884289 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerStarted","Data":"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235"} Jan 23 17:41:30 crc kubenswrapper[4718]: I0123 
17:41:30.903815 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-np8zg" podStartSLOduration=2.204197641 podStartE2EDuration="7.903799649s" podCreationTimestamp="2026-01-23 17:41:23 +0000 UTC" firstStartedPulling="2026-01-23 17:41:24.799563258 +0000 UTC m=+5085.946805249" lastFinishedPulling="2026-01-23 17:41:30.499165266 +0000 UTC m=+5091.646407257" observedRunningTime="2026-01-23 17:41:30.90016377 +0000 UTC m=+5092.047405771" watchObservedRunningTime="2026-01-23 17:41:30.903799649 +0000 UTC m=+5092.051041640" Jan 23 17:41:34 crc kubenswrapper[4718]: I0123 17:41:34.087901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:34 crc kubenswrapper[4718]: I0123 17:41:34.088308 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:35 crc kubenswrapper[4718]: I0123 17:41:35.146255 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-np8zg" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="registry-server" probeResult="failure" output=< Jan 23 17:41:35 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 17:41:35 crc kubenswrapper[4718]: > Jan 23 17:41:44 crc kubenswrapper[4718]: I0123 17:41:44.160153 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:44 crc kubenswrapper[4718]: I0123 17:41:44.216751 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:45 crc kubenswrapper[4718]: I0123 17:41:45.016242 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.046447 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-np8zg" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="registry-server" containerID="cri-o://30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235" gracePeriod=2 Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.629803 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.728270 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxq5\" (UniqueName: \"kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5\") pod \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.728378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities\") pod \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.728524 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content\") pod \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\" (UID: \"36b21c3b-af9e-46bd-8e13-a86933a3ae7c\") " Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.729331 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities" (OuterVolumeSpecName: "utilities") pod "36b21c3b-af9e-46bd-8e13-a86933a3ae7c" (UID: "36b21c3b-af9e-46bd-8e13-a86933a3ae7c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.729525 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.745772 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5" (OuterVolumeSpecName: "kube-api-access-qrxq5") pod "36b21c3b-af9e-46bd-8e13-a86933a3ae7c" (UID: "36b21c3b-af9e-46bd-8e13-a86933a3ae7c"). InnerVolumeSpecName "kube-api-access-qrxq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.831423 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxq5\" (UniqueName: \"kubernetes.io/projected/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-kube-api-access-qrxq5\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.836439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36b21c3b-af9e-46bd-8e13-a86933a3ae7c" (UID: "36b21c3b-af9e-46bd-8e13-a86933a3ae7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:41:46 crc kubenswrapper[4718]: I0123 17:41:46.934065 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b21c3b-af9e-46bd-8e13-a86933a3ae7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.060881 4718 generic.go:334] "Generic (PLEG): container finished" podID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerID="30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235" exitCode=0 Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.060925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerDied","Data":"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235"} Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.060952 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np8zg" event={"ID":"36b21c3b-af9e-46bd-8e13-a86933a3ae7c","Type":"ContainerDied","Data":"eb7063ab5797dc7cda60b1c7ff90201b41a259c416e11c9b5470ea0052af6a19"} Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.061000 4718 scope.go:117] "RemoveContainer" containerID="30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.061135 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np8zg" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.094973 4718 scope.go:117] "RemoveContainer" containerID="64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.097515 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.110274 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-np8zg"] Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.129529 4718 scope.go:117] "RemoveContainer" containerID="166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.168949 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" path="/var/lib/kubelet/pods/36b21c3b-af9e-46bd-8e13-a86933a3ae7c/volumes" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.195397 4718 scope.go:117] "RemoveContainer" containerID="30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235" Jan 23 17:41:47 crc kubenswrapper[4718]: E0123 17:41:47.196491 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235\": container with ID starting with 30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235 not found: ID does not exist" containerID="30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.196544 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235"} err="failed to get container status 
\"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235\": rpc error: code = NotFound desc = could not find container \"30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235\": container with ID starting with 30239ec390a802b5ae4608249f5a774fe974fc34054b7f4fe4aba46ba0bce235 not found: ID does not exist" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.196574 4718 scope.go:117] "RemoveContainer" containerID="64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7" Jan 23 17:41:47 crc kubenswrapper[4718]: E0123 17:41:47.197129 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7\": container with ID starting with 64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7 not found: ID does not exist" containerID="64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.197162 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7"} err="failed to get container status \"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7\": rpc error: code = NotFound desc = could not find container \"64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7\": container with ID starting with 64d416944766a7284ee584cb61fa0687868ef17c06be1e9f8bdcbb1172d1b4b7 not found: ID does not exist" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.197185 4718 scope.go:117] "RemoveContainer" containerID="166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e" Jan 23 17:41:47 crc kubenswrapper[4718]: E0123 17:41:47.197448 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e\": container with ID starting with 166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e not found: ID does not exist" containerID="166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e" Jan 23 17:41:47 crc kubenswrapper[4718]: I0123 17:41:47.197469 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e"} err="failed to get container status \"166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e\": rpc error: code = NotFound desc = could not find container \"166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e\": container with ID starting with 166140eaeeb832fc45870866b33226f8c65fe249946a9d0a75eafa09888a226e not found: ID does not exist" Jan 23 17:43:10 crc kubenswrapper[4718]: I0123 17:43:10.789840 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 17:43:58 crc kubenswrapper[4718]: I0123 17:43:58.875420 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:43:58 crc kubenswrapper[4718]: I0123 17:43:58.875953 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.551300 4718 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:06 crc kubenswrapper[4718]: E0123 17:44:06.552270 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="registry-server" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.552283 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="registry-server" Jan 23 17:44:06 crc kubenswrapper[4718]: E0123 17:44:06.552330 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="extract-content" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.552337 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="extract-content" Jan 23 17:44:06 crc kubenswrapper[4718]: E0123 17:44:06.552356 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="extract-utilities" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.552364 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="extract-utilities" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.552588 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b21c3b-af9e-46bd-8e13-a86933a3ae7c" containerName="registry-server" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.554331 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.565935 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.673158 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.673888 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894pn\" (UniqueName: \"kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.673946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.776423 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894pn\" (UniqueName: \"kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.776481 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.776583 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.777149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.777202 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.800203 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894pn\" (UniqueName: \"kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn\") pod \"community-operators-6vdt8\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:06 crc kubenswrapper[4718]: I0123 17:44:06.876387 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:07 crc kubenswrapper[4718]: I0123 17:44:07.496022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:07 crc kubenswrapper[4718]: I0123 17:44:07.543855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerStarted","Data":"ba7736fce2f651f8b03c9d35acf16ab34680e7605bf39cddd7669335126bd56e"} Jan 23 17:44:08 crc kubenswrapper[4718]: I0123 17:44:08.555043 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerID="6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4" exitCode=0 Jan 23 17:44:08 crc kubenswrapper[4718]: I0123 17:44:08.555275 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerDied","Data":"6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4"} Jan 23 17:44:09 crc kubenswrapper[4718]: I0123 17:44:09.589303 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerStarted","Data":"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6"} Jan 23 17:44:10 crc kubenswrapper[4718]: I0123 17:44:10.600971 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerID="2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6" exitCode=0 Jan 23 17:44:10 crc kubenswrapper[4718]: I0123 17:44:10.601013 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" 
event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerDied","Data":"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6"} Jan 23 17:44:11 crc kubenswrapper[4718]: I0123 17:44:11.616223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerStarted","Data":"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b"} Jan 23 17:44:11 crc kubenswrapper[4718]: I0123 17:44:11.642224 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6vdt8" podStartSLOduration=3.131515298 podStartE2EDuration="5.642184107s" podCreationTimestamp="2026-01-23 17:44:06 +0000 UTC" firstStartedPulling="2026-01-23 17:44:08.558924262 +0000 UTC m=+5249.706166253" lastFinishedPulling="2026-01-23 17:44:11.069593031 +0000 UTC m=+5252.216835062" observedRunningTime="2026-01-23 17:44:11.634695804 +0000 UTC m=+5252.781937815" watchObservedRunningTime="2026-01-23 17:44:11.642184107 +0000 UTC m=+5252.789426098" Jan 23 17:44:16 crc kubenswrapper[4718]: I0123 17:44:16.877237 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:16 crc kubenswrapper[4718]: I0123 17:44:16.877836 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:16 crc kubenswrapper[4718]: I0123 17:44:16.927509 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:17 crc kubenswrapper[4718]: I0123 17:44:17.729902 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:17 crc kubenswrapper[4718]: I0123 17:44:17.782373 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:19 crc kubenswrapper[4718]: I0123 17:44:19.695951 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6vdt8" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="registry-server" containerID="cri-o://604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b" gracePeriod=2 Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.285897 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.429583 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-894pn\" (UniqueName: \"kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn\") pod \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.430332 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities\") pod \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.430403 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content\") pod \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\" (UID: \"6bebe1ce-ea1f-4946-9852-bfd3881df6cd\") " Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.431235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities" (OuterVolumeSpecName: "utilities") pod "6bebe1ce-ea1f-4946-9852-bfd3881df6cd" (UID: 
"6bebe1ce-ea1f-4946-9852-bfd3881df6cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.432187 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.435677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn" (OuterVolumeSpecName: "kube-api-access-894pn") pod "6bebe1ce-ea1f-4946-9852-bfd3881df6cd" (UID: "6bebe1ce-ea1f-4946-9852-bfd3881df6cd"). InnerVolumeSpecName "kube-api-access-894pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.477767 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bebe1ce-ea1f-4946-9852-bfd3881df6cd" (UID: "6bebe1ce-ea1f-4946-9852-bfd3881df6cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.534425 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.534465 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-894pn\" (UniqueName: \"kubernetes.io/projected/6bebe1ce-ea1f-4946-9852-bfd3881df6cd-kube-api-access-894pn\") on node \"crc\" DevicePath \"\"" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.707433 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerID="604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b" exitCode=0 Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.707482 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerDied","Data":"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b"} Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.707505 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vdt8" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.707524 4718 scope.go:117] "RemoveContainer" containerID="604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.707512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vdt8" event={"ID":"6bebe1ce-ea1f-4946-9852-bfd3881df6cd","Type":"ContainerDied","Data":"ba7736fce2f651f8b03c9d35acf16ab34680e7605bf39cddd7669335126bd56e"} Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.734034 4718 scope.go:117] "RemoveContainer" containerID="2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.757816 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.762667 4718 scope.go:117] "RemoveContainer" containerID="6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.780174 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6vdt8"] Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.821708 4718 scope.go:117] "RemoveContainer" containerID="604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b" Jan 23 17:44:20 crc kubenswrapper[4718]: E0123 17:44:20.824655 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b\": container with ID starting with 604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b not found: ID does not exist" containerID="604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.824697 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b"} err="failed to get container status \"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b\": rpc error: code = NotFound desc = could not find container \"604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b\": container with ID starting with 604395bf0d1059b56b643ac7c0fb5e0b05bd65eb302b77ad23a8b0ec0d976c4b not found: ID does not exist" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.824727 4718 scope.go:117] "RemoveContainer" containerID="2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6" Jan 23 17:44:20 crc kubenswrapper[4718]: E0123 17:44:20.825166 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6\": container with ID starting with 2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6 not found: ID does not exist" containerID="2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.825224 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6"} err="failed to get container status \"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6\": rpc error: code = NotFound desc = could not find container \"2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6\": container with ID starting with 2e97c1b1f09b0bab91109bf0737a9887e091c3fe2bb780e26af4cb5d55bdd4e6 not found: ID does not exist" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.825252 4718 scope.go:117] "RemoveContainer" containerID="6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4" Jan 23 17:44:20 crc kubenswrapper[4718]: E0123 
17:44:20.825834 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4\": container with ID starting with 6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4 not found: ID does not exist" containerID="6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4" Jan 23 17:44:20 crc kubenswrapper[4718]: I0123 17:44:20.825874 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4"} err="failed to get container status \"6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4\": rpc error: code = NotFound desc = could not find container \"6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4\": container with ID starting with 6cd3fc7e2e7f0dd799757b7ffefbfde4925116887beba1495ee02548f4d9d2b4 not found: ID does not exist" Jan 23 17:44:21 crc kubenswrapper[4718]: I0123 17:44:21.155896 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" path="/var/lib/kubelet/pods/6bebe1ce-ea1f-4946-9852-bfd3881df6cd/volumes" Jan 23 17:44:28 crc kubenswrapper[4718]: I0123 17:44:28.875711 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:44:28 crc kubenswrapper[4718]: I0123 17:44:28.876264 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 23 17:44:58 crc kubenswrapper[4718]: I0123 17:44:58.876044 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:44:58 crc kubenswrapper[4718]: I0123 17:44:58.877377 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:44:58 crc kubenswrapper[4718]: I0123 17:44:58.877486 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:44:58 crc kubenswrapper[4718]: I0123 17:44:58.878554 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:44:58 crc kubenswrapper[4718]: I0123 17:44:58.878623 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" gracePeriod=600 Jan 23 17:44:59 crc kubenswrapper[4718]: E0123 17:44:59.008054 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:44:59 crc kubenswrapper[4718]: I0123 17:44:59.144892 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" exitCode=0 Jan 23 17:44:59 crc kubenswrapper[4718]: I0123 17:44:59.155230 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df"} Jan 23 17:44:59 crc kubenswrapper[4718]: I0123 17:44:59.155297 4718 scope.go:117] "RemoveContainer" containerID="15073422a149a95e5ce263cbfa8fa789642a6c6e12faa0a6202d2065d924068a" Jan 23 17:44:59 crc kubenswrapper[4718]: I0123 17:44:59.156159 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:44:59 crc kubenswrapper[4718]: E0123 17:44:59.156471 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.153079 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g"] Jan 23 17:45:00 crc kubenswrapper[4718]: E0123 17:45:00.154188 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="extract-utilities" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.154206 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="extract-utilities" Jan 23 17:45:00 crc kubenswrapper[4718]: E0123 17:45:00.154237 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.154243 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4718]: E0123 17:45:00.154269 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="extract-content" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.154275 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="extract-content" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.154509 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bebe1ce-ea1f-4946-9852-bfd3881df6cd" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.155313 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.168439 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.168442 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.182789 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g"] Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.257803 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.257970 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.258106 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnz6\" (UniqueName: \"kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.361441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.361552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.361601 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnz6\" (UniqueName: \"kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.362312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.544132 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.544206 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnz6\" (UniqueName: \"kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6\") pod \"collect-profiles-29486505-kwd5g\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:00 crc kubenswrapper[4718]: I0123 17:45:00.787480 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:01 crc kubenswrapper[4718]: I0123 17:45:01.258707 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g"] Jan 23 17:45:01 crc kubenswrapper[4718]: W0123 17:45:01.260578 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28e29e43_8631_4f49_8c16_6cf7448e947c.slice/crio-99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526 WatchSource:0}: Error finding container 99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526: Status 404 returned error can't find the container with id 99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526 Jan 23 17:45:02 crc kubenswrapper[4718]: I0123 17:45:02.183371 4718 generic.go:334] "Generic (PLEG): container finished" podID="28e29e43-8631-4f49-8c16-6cf7448e947c" containerID="081b1e8ae92be19e457f803181f7508f062e688c5a56024245392d0a8f25c0c3" exitCode=0 Jan 23 17:45:02 crc kubenswrapper[4718]: I0123 17:45:02.183418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" event={"ID":"28e29e43-8631-4f49-8c16-6cf7448e947c","Type":"ContainerDied","Data":"081b1e8ae92be19e457f803181f7508f062e688c5a56024245392d0a8f25c0c3"} Jan 23 17:45:02 crc kubenswrapper[4718]: I0123 17:45:02.183768 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" event={"ID":"28e29e43-8631-4f49-8c16-6cf7448e947c","Type":"ContainerStarted","Data":"99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526"} Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.636296 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.746105 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume\") pod \"28e29e43-8631-4f49-8c16-6cf7448e947c\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.746166 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjnz6\" (UniqueName: \"kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6\") pod \"28e29e43-8631-4f49-8c16-6cf7448e947c\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.746259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume\") pod \"28e29e43-8631-4f49-8c16-6cf7448e947c\" (UID: \"28e29e43-8631-4f49-8c16-6cf7448e947c\") " Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.746942 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume" (OuterVolumeSpecName: "config-volume") pod "28e29e43-8631-4f49-8c16-6cf7448e947c" (UID: "28e29e43-8631-4f49-8c16-6cf7448e947c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.752791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6" (OuterVolumeSpecName: "kube-api-access-mjnz6") pod "28e29e43-8631-4f49-8c16-6cf7448e947c" (UID: "28e29e43-8631-4f49-8c16-6cf7448e947c"). InnerVolumeSpecName "kube-api-access-mjnz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.753416 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28e29e43-8631-4f49-8c16-6cf7448e947c" (UID: "28e29e43-8631-4f49-8c16-6cf7448e947c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.849682 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e29e43-8631-4f49-8c16-6cf7448e947c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.849717 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjnz6\" (UniqueName: \"kubernetes.io/projected/28e29e43-8631-4f49-8c16-6cf7448e947c-kube-api-access-mjnz6\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:03 crc kubenswrapper[4718]: I0123 17:45:03.849727 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28e29e43-8631-4f49-8c16-6cf7448e947c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:04 crc kubenswrapper[4718]: I0123 17:45:04.207872 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" event={"ID":"28e29e43-8631-4f49-8c16-6cf7448e947c","Type":"ContainerDied","Data":"99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526"} Jan 23 17:45:04 crc kubenswrapper[4718]: I0123 17:45:04.208284 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e56783200d230f4b06d30baf8fe9c7b0c605f1261bf4ee3f49699964462526" Jan 23 17:45:04 crc kubenswrapper[4718]: I0123 17:45:04.207928 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-kwd5g" Jan 23 17:45:04 crc kubenswrapper[4718]: I0123 17:45:04.705432 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9"] Jan 23 17:45:04 crc kubenswrapper[4718]: I0123 17:45:04.716820 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-kk6n9"] Jan 23 17:45:05 crc kubenswrapper[4718]: I0123 17:45:05.163870 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19fb3d0-e348-47e4-8318-677140740104" path="/var/lib/kubelet/pods/c19fb3d0-e348-47e4-8318-677140740104/volumes" Jan 23 17:45:13 crc kubenswrapper[4718]: I0123 17:45:13.141464 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:45:13 crc kubenswrapper[4718]: E0123 17:45:13.142805 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:45:28 crc kubenswrapper[4718]: I0123 17:45:28.141487 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:45:28 crc kubenswrapper[4718]: E0123 17:45:28.142511 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:45:39 crc kubenswrapper[4718]: I0123 17:45:39.154438 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:45:39 crc kubenswrapper[4718]: E0123 17:45:39.155371 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:45:50 crc kubenswrapper[4718]: I0123 17:45:50.140298 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:45:50 crc kubenswrapper[4718]: E0123 17:45:50.141450 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:46:04 crc kubenswrapper[4718]: I0123 17:46:04.121949 4718 scope.go:117] "RemoveContainer" containerID="f1a85edb1871cf8576f9b933c3473792199113e2020f9154cd5c1f5c540f2684" Jan 23 17:46:05 crc kubenswrapper[4718]: I0123 17:46:05.141011 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:46:05 crc kubenswrapper[4718]: E0123 17:46:05.141673 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:46:19 crc kubenswrapper[4718]: I0123 17:46:19.149720 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:46:19 crc kubenswrapper[4718]: E0123 17:46:19.150931 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:46:33 crc kubenswrapper[4718]: I0123 17:46:33.140132 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:46:33 crc kubenswrapper[4718]: E0123 17:46:33.142277 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:46:47 crc kubenswrapper[4718]: I0123 17:46:47.140644 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:46:47 crc kubenswrapper[4718]: E0123 17:46:47.142541 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:46:58 crc kubenswrapper[4718]: I0123 17:46:58.141117 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:46:58 crc kubenswrapper[4718]: E0123 17:46:58.142243 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:47:10 crc kubenswrapper[4718]: I0123 17:47:10.140699 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:47:10 crc kubenswrapper[4718]: E0123 17:47:10.141511 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:47:25 crc kubenswrapper[4718]: I0123 17:47:25.141615 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:47:25 crc kubenswrapper[4718]: E0123 17:47:25.143268 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:47:38 crc kubenswrapper[4718]: I0123 17:47:38.141958 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:47:38 crc kubenswrapper[4718]: E0123 17:47:38.143030 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.165283 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:41 crc kubenswrapper[4718]: E0123 17:47:41.166416 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e29e43-8631-4f49-8c16-6cf7448e947c" containerName="collect-profiles" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.166432 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e29e43-8631-4f49-8c16-6cf7448e947c" containerName="collect-profiles" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.166782 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e29e43-8631-4f49-8c16-6cf7448e947c" containerName="collect-profiles" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.169016 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.179450 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.197648 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkvd\" (UniqueName: \"kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.197732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.197980 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.300372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkvd\" (UniqueName: \"kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.300433 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.300548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.301165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.301243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.321311 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkvd\" (UniqueName: \"kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd\") pod \"redhat-marketplace-zwpm5\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.489207 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:41 crc kubenswrapper[4718]: I0123 17:47:41.960011 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:42 crc kubenswrapper[4718]: I0123 17:47:42.985823 4718 generic.go:334] "Generic (PLEG): container finished" podID="c991804c-c73c-4191-8881-de19ce44d01a" containerID="e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4" exitCode=0 Jan 23 17:47:42 crc kubenswrapper[4718]: I0123 17:47:42.985942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerDied","Data":"e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4"} Jan 23 17:47:42 crc kubenswrapper[4718]: I0123 17:47:42.988058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerStarted","Data":"2889671a0dfeb9cd0215fd157ab35373aa7448ab82b594f9e542d603cfd01706"} Jan 23 17:47:42 crc kubenswrapper[4718]: I0123 17:47:42.990646 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:47:45 crc kubenswrapper[4718]: I0123 17:47:45.018398 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerStarted","Data":"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6"} Jan 23 17:47:46 crc kubenswrapper[4718]: I0123 17:47:46.032294 4718 generic.go:334] "Generic (PLEG): container finished" podID="c991804c-c73c-4191-8881-de19ce44d01a" containerID="614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6" exitCode=0 Jan 23 17:47:46 crc kubenswrapper[4718]: I0123 17:47:46.032376 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerDied","Data":"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6"} Jan 23 17:47:47 crc kubenswrapper[4718]: I0123 17:47:47.060927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerStarted","Data":"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768"} Jan 23 17:47:47 crc kubenswrapper[4718]: I0123 17:47:47.116901 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwpm5" podStartSLOduration=3.6475060470000003 podStartE2EDuration="7.116882163s" podCreationTimestamp="2026-01-23 17:47:40 +0000 UTC" firstStartedPulling="2026-01-23 17:47:42.990279991 +0000 UTC m=+5464.137521992" lastFinishedPulling="2026-01-23 17:47:46.459656127 +0000 UTC m=+5467.606898108" observedRunningTime="2026-01-23 17:47:47.100094457 +0000 UTC m=+5468.247336448" watchObservedRunningTime="2026-01-23 17:47:47.116882163 +0000 UTC m=+5468.264124154" Jan 23 17:47:50 crc kubenswrapper[4718]: I0123 17:47:50.139718 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:47:50 crc kubenswrapper[4718]: E0123 17:47:50.140842 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:47:51 crc kubenswrapper[4718]: I0123 17:47:51.490039 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:51 crc kubenswrapper[4718]: I0123 17:47:51.490417 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:51 crc kubenswrapper[4718]: I0123 17:47:51.540781 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:52 crc kubenswrapper[4718]: I0123 17:47:52.882889 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:52 crc kubenswrapper[4718]: I0123 17:47:52.957244 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:54 crc kubenswrapper[4718]: I0123 17:47:54.129339 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwpm5" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="registry-server" containerID="cri-o://be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768" gracePeriod=2 Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.122839 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.144585 4718 generic.go:334] "Generic (PLEG): container finished" podID="c991804c-c73c-4191-8881-de19ce44d01a" containerID="be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768" exitCode=0 Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.144727 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpm5" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.159153 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerDied","Data":"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768"} Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.159205 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpm5" event={"ID":"c991804c-c73c-4191-8881-de19ce44d01a","Type":"ContainerDied","Data":"2889671a0dfeb9cd0215fd157ab35373aa7448ab82b594f9e542d603cfd01706"} Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.159224 4718 scope.go:117] "RemoveContainer" containerID="be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.194762 4718 scope.go:117] "RemoveContainer" containerID="614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.221445 4718 scope.go:117] "RemoveContainer" containerID="e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.258424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqkvd\" (UniqueName: \"kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd\") pod \"c991804c-c73c-4191-8881-de19ce44d01a\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.258744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities\") pod \"c991804c-c73c-4191-8881-de19ce44d01a\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " Jan 23 17:47:55 crc 
kubenswrapper[4718]: I0123 17:47:55.258846 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content\") pod \"c991804c-c73c-4191-8881-de19ce44d01a\" (UID: \"c991804c-c73c-4191-8881-de19ce44d01a\") " Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.260669 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities" (OuterVolumeSpecName: "utilities") pod "c991804c-c73c-4191-8881-de19ce44d01a" (UID: "c991804c-c73c-4191-8881-de19ce44d01a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.270837 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd" (OuterVolumeSpecName: "kube-api-access-kqkvd") pod "c991804c-c73c-4191-8881-de19ce44d01a" (UID: "c991804c-c73c-4191-8881-de19ce44d01a"). InnerVolumeSpecName "kube-api-access-kqkvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.290934 4718 scope.go:117] "RemoveContainer" containerID="be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768" Jan 23 17:47:55 crc kubenswrapper[4718]: E0123 17:47:55.291644 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768\": container with ID starting with be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768 not found: ID does not exist" containerID="be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.291682 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768"} err="failed to get container status \"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768\": rpc error: code = NotFound desc = could not find container \"be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768\": container with ID starting with be394551d16736a0326017d8db6ccfec18978c57954277250393281835739768 not found: ID does not exist" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.291725 4718 scope.go:117] "RemoveContainer" containerID="614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6" Jan 23 17:47:55 crc kubenswrapper[4718]: E0123 17:47:55.291960 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6\": container with ID starting with 614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6 not found: ID does not exist" containerID="614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.291989 
4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6"} err="failed to get container status \"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6\": rpc error: code = NotFound desc = could not find container \"614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6\": container with ID starting with 614c48ef2698766d29a32929de45f7aaf5902c14ba56a4f74d02ef8489a0a5e6 not found: ID does not exist" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.292009 4718 scope.go:117] "RemoveContainer" containerID="e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4" Jan 23 17:47:55 crc kubenswrapper[4718]: E0123 17:47:55.292265 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4\": container with ID starting with e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4 not found: ID does not exist" containerID="e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.292305 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4"} err="failed to get container status \"e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4\": rpc error: code = NotFound desc = could not find container \"e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4\": container with ID starting with e3ff6125114c6b158fba6b2b73611f8be4c5b63ff175d8d2b909694557c2f4c4 not found: ID does not exist" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.294852 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "c991804c-c73c-4191-8881-de19ce44d01a" (UID: "c991804c-c73c-4191-8881-de19ce44d01a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.363415 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.363466 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c991804c-c73c-4191-8881-de19ce44d01a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.363489 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqkvd\" (UniqueName: \"kubernetes.io/projected/c991804c-c73c-4191-8881-de19ce44d01a-kube-api-access-kqkvd\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.490950 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:55 crc kubenswrapper[4718]: I0123 17:47:55.502904 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpm5"] Jan 23 17:47:57 crc kubenswrapper[4718]: I0123 17:47:57.153653 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c991804c-c73c-4191-8881-de19ce44d01a" path="/var/lib/kubelet/pods/c991804c-c73c-4191-8881-de19ce44d01a/volumes" Jan 23 17:48:01 crc kubenswrapper[4718]: I0123 17:48:01.141085 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:48:01 crc kubenswrapper[4718]: E0123 17:48:01.142115 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:48:15 crc kubenswrapper[4718]: I0123 17:48:15.141169 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:48:15 crc kubenswrapper[4718]: E0123 17:48:15.142034 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:48:28 crc kubenswrapper[4718]: I0123 17:48:28.141503 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:48:28 crc kubenswrapper[4718]: E0123 17:48:28.142365 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:48:41 crc kubenswrapper[4718]: I0123 17:48:41.141624 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:48:41 crc kubenswrapper[4718]: E0123 17:48:41.142562 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:48:53 crc kubenswrapper[4718]: I0123 17:48:53.140468 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:48:53 crc kubenswrapper[4718]: E0123 17:48:53.141381 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:49:08 crc kubenswrapper[4718]: I0123 17:49:08.141183 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:49:08 crc kubenswrapper[4718]: E0123 17:49:08.142122 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:49:21 crc kubenswrapper[4718]: I0123 17:49:21.140858 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:49:21 crc kubenswrapper[4718]: E0123 17:49:21.141809 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.026286 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:33 crc kubenswrapper[4718]: E0123 17:49:33.027405 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="extract-content" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.027423 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="extract-content" Jan 23 17:49:33 crc kubenswrapper[4718]: E0123 17:49:33.027454 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="extract-utilities" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.027462 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="extract-utilities" Jan 23 17:49:33 crc kubenswrapper[4718]: E0123 17:49:33.027482 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="registry-server" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.027489 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="registry-server" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.027822 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c991804c-c73c-4191-8881-de19ce44d01a" containerName="registry-server" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.029928 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.037467 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.090446 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.090657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8f2\" (UniqueName: \"kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.090770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.140107 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:49:33 crc kubenswrapper[4718]: E0123 17:49:33.140489 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.195088 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.195540 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.196886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8f2\" (UniqueName: \"kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.196996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.197380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.231534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8f2\" (UniqueName: \"kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2\") pod \"certified-operators-9vxhn\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.353347 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:33 crc kubenswrapper[4718]: I0123 17:49:33.862564 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:34 crc kubenswrapper[4718]: I0123 17:49:34.292100 4718 generic.go:334] "Generic (PLEG): container finished" podID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerID="5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908" exitCode=0 Jan 23 17:49:34 crc kubenswrapper[4718]: I0123 17:49:34.292182 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerDied","Data":"5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908"} Jan 23 17:49:34 crc kubenswrapper[4718]: I0123 17:49:34.292484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerStarted","Data":"c041b5c493958f8bd1e0fe66dd227225ce57ac3c22988194de4c34781544d349"} Jan 23 17:49:36 crc kubenswrapper[4718]: I0123 17:49:36.318525 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerStarted","Data":"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5"} Jan 23 17:49:37 crc kubenswrapper[4718]: I0123 17:49:37.332124 4718 generic.go:334] "Generic (PLEG): container finished" podID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerID="feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5" exitCode=0 Jan 23 17:49:37 crc kubenswrapper[4718]: I0123 17:49:37.332217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerDied","Data":"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5"} Jan 23 17:49:38 crc kubenswrapper[4718]: I0123 17:49:38.348438 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerStarted","Data":"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93"} Jan 23 17:49:38 crc kubenswrapper[4718]: I0123 17:49:38.378052 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9vxhn" podStartSLOduration=1.77525263 podStartE2EDuration="5.37803318s" podCreationTimestamp="2026-01-23 17:49:33 +0000 UTC" firstStartedPulling="2026-01-23 17:49:34.295078835 +0000 UTC m=+5575.442320826" lastFinishedPulling="2026-01-23 17:49:37.897859375 +0000 UTC m=+5579.045101376" observedRunningTime="2026-01-23 17:49:38.367551955 +0000 UTC m=+5579.514793986" watchObservedRunningTime="2026-01-23 17:49:38.37803318 +0000 UTC m=+5579.525275171" Jan 23 17:49:43 crc kubenswrapper[4718]: I0123 17:49:43.354462 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:43 crc kubenswrapper[4718]: I0123 17:49:43.355017 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:43 crc kubenswrapper[4718]: I0123 17:49:43.426466 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:43 crc kubenswrapper[4718]: I0123 17:49:43.495817 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:43 crc kubenswrapper[4718]: I0123 17:49:43.677866 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:45 crc kubenswrapper[4718]: I0123 17:49:45.427906 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9vxhn" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="registry-server" containerID="cri-o://1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93" gracePeriod=2 Jan 23 17:49:45 crc kubenswrapper[4718]: I0123 17:49:45.951754 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.146907 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content\") pod \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.147117 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm8f2\" (UniqueName: \"kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2\") pod \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.147306 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities\") pod \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\" (UID: \"9df200f7-69b5-4e14-b55d-5cb40c103eeb\") " Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.148030 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities" (OuterVolumeSpecName: "utilities") pod "9df200f7-69b5-4e14-b55d-5cb40c103eeb" (UID: "9df200f7-69b5-4e14-b55d-5cb40c103eeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.153282 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2" (OuterVolumeSpecName: "kube-api-access-sm8f2") pod "9df200f7-69b5-4e14-b55d-5cb40c103eeb" (UID: "9df200f7-69b5-4e14-b55d-5cb40c103eeb"). InnerVolumeSpecName "kube-api-access-sm8f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.191825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9df200f7-69b5-4e14-b55d-5cb40c103eeb" (UID: "9df200f7-69b5-4e14-b55d-5cb40c103eeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.250379 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm8f2\" (UniqueName: \"kubernetes.io/projected/9df200f7-69b5-4e14-b55d-5cb40c103eeb-kube-api-access-sm8f2\") on node \"crc\" DevicePath \"\"" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.250416 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.250427 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df200f7-69b5-4e14-b55d-5cb40c103eeb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.439676 4718 generic.go:334] "Generic (PLEG): container finished" podID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerID="1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93" exitCode=0 Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.439723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerDied","Data":"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93"} Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.439762 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9vxhn" event={"ID":"9df200f7-69b5-4e14-b55d-5cb40c103eeb","Type":"ContainerDied","Data":"c041b5c493958f8bd1e0fe66dd227225ce57ac3c22988194de4c34781544d349"} Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.439780 4718 scope.go:117] "RemoveContainer" containerID="1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.439796 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vxhn" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.471590 4718 scope.go:117] "RemoveContainer" containerID="feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.493767 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.504390 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9vxhn"] Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.523738 4718 scope.go:117] "RemoveContainer" containerID="5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.557836 4718 scope.go:117] "RemoveContainer" containerID="1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93" Jan 23 17:49:46 crc kubenswrapper[4718]: E0123 17:49:46.558337 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93\": container with ID starting with 1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93 not found: ID does not exist" containerID="1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 
17:49:46.558372 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93"} err="failed to get container status \"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93\": rpc error: code = NotFound desc = could not find container \"1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93\": container with ID starting with 1c78bc88a34262257e052d50dce44b973129864b78d31110a111733dfdc4ce93 not found: ID does not exist" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.558399 4718 scope.go:117] "RemoveContainer" containerID="feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5" Jan 23 17:49:46 crc kubenswrapper[4718]: E0123 17:49:46.558667 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5\": container with ID starting with feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5 not found: ID does not exist" containerID="feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.558702 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5"} err="failed to get container status \"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5\": rpc error: code = NotFound desc = could not find container \"feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5\": container with ID starting with feb2f33ecdac29b569311d420ad4d4ae91adc1bddf4b4c8b5cc9e978da624ac5 not found: ID does not exist" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.558726 4718 scope.go:117] "RemoveContainer" containerID="5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908" Jan 23 17:49:46 crc 
kubenswrapper[4718]: E0123 17:49:46.558961 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908\": container with ID starting with 5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908 not found: ID does not exist" containerID="5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908" Jan 23 17:49:46 crc kubenswrapper[4718]: I0123 17:49:46.558994 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908"} err="failed to get container status \"5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908\": rpc error: code = NotFound desc = could not find container \"5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908\": container with ID starting with 5960b3cefb6d8ce32f8ca5df5e54f70b9c664dc67ff2061411172fea73d32908 not found: ID does not exist" Jan 23 17:49:47 crc kubenswrapper[4718]: I0123 17:49:47.141038 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:49:47 crc kubenswrapper[4718]: E0123 17:49:47.141645 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:49:47 crc kubenswrapper[4718]: I0123 17:49:47.158686 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" path="/var/lib/kubelet/pods/9df200f7-69b5-4e14-b55d-5cb40c103eeb/volumes" Jan 23 17:49:58 crc 
kubenswrapper[4718]: I0123 17:49:58.141270 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:49:58 crc kubenswrapper[4718]: E0123 17:49:58.142250 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:50:10 crc kubenswrapper[4718]: I0123 17:50:10.142883 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:50:10 crc kubenswrapper[4718]: I0123 17:50:10.735315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647"} Jan 23 17:50:24 crc kubenswrapper[4718]: I0123 17:50:24.544774 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="91543550-f764-468a-a1e1-980e3d08aa41" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:52:28 crc kubenswrapper[4718]: I0123 17:52:28.875706 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:52:28 crc kubenswrapper[4718]: I0123 17:52:28.876389 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.841229 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:29 crc kubenswrapper[4718]: E0123 17:52:29.842212 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="extract-utilities" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.842235 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="extract-utilities" Jan 23 17:52:29 crc kubenswrapper[4718]: E0123 17:52:29.842276 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="extract-content" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.842286 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="extract-content" Jan 23 17:52:29 crc kubenswrapper[4718]: E0123 17:52:29.842315 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="registry-server" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.842322 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="registry-server" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.842596 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df200f7-69b5-4e14-b55d-5cb40c103eeb" containerName="registry-server" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.846103 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.873285 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.963395 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sbbc\" (UniqueName: \"kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.964260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:29 crc kubenswrapper[4718]: I0123 17:52:29.964315 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.066215 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.066272 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.066361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sbbc\" (UniqueName: \"kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.066764 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.066834 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.089524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sbbc\" (UniqueName: \"kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc\") pod \"redhat-operators-44d8h\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.174393 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:30 crc kubenswrapper[4718]: I0123 17:52:30.705911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:31 crc kubenswrapper[4718]: I0123 17:52:31.250553 4718 generic.go:334] "Generic (PLEG): container finished" podID="ac9e7735-6d19-46f6-b387-147874311d92" containerID="47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb" exitCode=0 Jan 23 17:52:31 crc kubenswrapper[4718]: I0123 17:52:31.250759 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerDied","Data":"47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb"} Jan 23 17:52:31 crc kubenswrapper[4718]: I0123 17:52:31.250905 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerStarted","Data":"912b9f1c63cebc52037ce3e274027bd015bf4dddef5a244bd2f5beb29819769e"} Jan 23 17:52:33 crc kubenswrapper[4718]: I0123 17:52:33.280522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerStarted","Data":"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56"} Jan 23 17:52:36 crc kubenswrapper[4718]: I0123 17:52:36.310184 4718 generic.go:334] "Generic (PLEG): container finished" podID="ac9e7735-6d19-46f6-b387-147874311d92" containerID="ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56" exitCode=0 Jan 23 17:52:36 crc kubenswrapper[4718]: I0123 17:52:36.310266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" 
event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerDied","Data":"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56"} Jan 23 17:52:37 crc kubenswrapper[4718]: I0123 17:52:37.326821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerStarted","Data":"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5"} Jan 23 17:52:37 crc kubenswrapper[4718]: I0123 17:52:37.365930 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-44d8h" podStartSLOduration=2.716490617 podStartE2EDuration="8.365910261s" podCreationTimestamp="2026-01-23 17:52:29 +0000 UTC" firstStartedPulling="2026-01-23 17:52:31.252521781 +0000 UTC m=+5752.399763772" lastFinishedPulling="2026-01-23 17:52:36.901941425 +0000 UTC m=+5758.049183416" observedRunningTime="2026-01-23 17:52:37.351937811 +0000 UTC m=+5758.499179822" watchObservedRunningTime="2026-01-23 17:52:37.365910261 +0000 UTC m=+5758.513152242" Jan 23 17:52:40 crc kubenswrapper[4718]: I0123 17:52:40.174802 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:40 crc kubenswrapper[4718]: I0123 17:52:40.175353 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:41 crc kubenswrapper[4718]: I0123 17:52:41.222227 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-44d8h" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="registry-server" probeResult="failure" output=< Jan 23 17:52:41 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 17:52:41 crc kubenswrapper[4718]: > Jan 23 17:52:50 crc kubenswrapper[4718]: I0123 17:52:50.228873 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:50 crc kubenswrapper[4718]: I0123 17:52:50.279377 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:50 crc kubenswrapper[4718]: I0123 17:52:50.475471 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:51 crc kubenswrapper[4718]: I0123 17:52:51.500409 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-44d8h" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="registry-server" containerID="cri-o://12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5" gracePeriod=2 Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.152191 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.211360 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities\") pod \"ac9e7735-6d19-46f6-b387-147874311d92\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.211736 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content\") pod \"ac9e7735-6d19-46f6-b387-147874311d92\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.211887 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sbbc\" (UniqueName: \"kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc\") pod 
\"ac9e7735-6d19-46f6-b387-147874311d92\" (UID: \"ac9e7735-6d19-46f6-b387-147874311d92\") " Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.212328 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities" (OuterVolumeSpecName: "utilities") pod "ac9e7735-6d19-46f6-b387-147874311d92" (UID: "ac9e7735-6d19-46f6-b387-147874311d92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.213064 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.218424 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc" (OuterVolumeSpecName: "kube-api-access-2sbbc") pod "ac9e7735-6d19-46f6-b387-147874311d92" (UID: "ac9e7735-6d19-46f6-b387-147874311d92"). InnerVolumeSpecName "kube-api-access-2sbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.315335 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sbbc\" (UniqueName: \"kubernetes.io/projected/ac9e7735-6d19-46f6-b387-147874311d92-kube-api-access-2sbbc\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.328248 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac9e7735-6d19-46f6-b387-147874311d92" (UID: "ac9e7735-6d19-46f6-b387-147874311d92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.418320 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e7735-6d19-46f6-b387-147874311d92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.514773 4718 generic.go:334] "Generic (PLEG): container finished" podID="ac9e7735-6d19-46f6-b387-147874311d92" containerID="12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5" exitCode=0 Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.514852 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44d8h" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.514838 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerDied","Data":"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5"} Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.514999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44d8h" event={"ID":"ac9e7735-6d19-46f6-b387-147874311d92","Type":"ContainerDied","Data":"912b9f1c63cebc52037ce3e274027bd015bf4dddef5a244bd2f5beb29819769e"} Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.515020 4718 scope.go:117] "RemoveContainer" containerID="12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.551806 4718 scope.go:117] "RemoveContainer" containerID="ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.570932 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 
17:52:52.588495 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-44d8h"] Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.593767 4718 scope.go:117] "RemoveContainer" containerID="47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.661809 4718 scope.go:117] "RemoveContainer" containerID="12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5" Jan 23 17:52:52 crc kubenswrapper[4718]: E0123 17:52:52.667256 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5\": container with ID starting with 12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5 not found: ID does not exist" containerID="12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.667290 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5"} err="failed to get container status \"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5\": rpc error: code = NotFound desc = could not find container \"12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5\": container with ID starting with 12f4d65c88131225625af81e7744e9c4b52f93570c9f5b93a6f7343342143cf5 not found: ID does not exist" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.667312 4718 scope.go:117] "RemoveContainer" containerID="ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56" Jan 23 17:52:52 crc kubenswrapper[4718]: E0123 17:52:52.667853 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56\": container with ID 
starting with ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56 not found: ID does not exist" containerID="ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.667874 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56"} err="failed to get container status \"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56\": rpc error: code = NotFound desc = could not find container \"ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56\": container with ID starting with ccb90a14df93eb8b1b56d80748260a7282e985dc774346ee02b61f45a1702c56 not found: ID does not exist" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.667886 4718 scope.go:117] "RemoveContainer" containerID="47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb" Jan 23 17:52:52 crc kubenswrapper[4718]: E0123 17:52:52.675789 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb\": container with ID starting with 47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb not found: ID does not exist" containerID="47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb" Jan 23 17:52:52 crc kubenswrapper[4718]: I0123 17:52:52.675817 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb"} err="failed to get container status \"47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb\": rpc error: code = NotFound desc = could not find container \"47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb\": container with ID starting with 47b2995550788dfe9134ee8b596cae3271cc1ec79e164f04b6dd596e91aa11bb not found: 
ID does not exist" Jan 23 17:52:53 crc kubenswrapper[4718]: I0123 17:52:53.159228 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9e7735-6d19-46f6-b387-147874311d92" path="/var/lib/kubelet/pods/ac9e7735-6d19-46f6-b387-147874311d92/volumes" Jan 23 17:52:58 crc kubenswrapper[4718]: I0123 17:52:58.875171 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:52:58 crc kubenswrapper[4718]: I0123 17:52:58.875666 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:53:28 crc kubenswrapper[4718]: I0123 17:53:28.875437 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:53:28 crc kubenswrapper[4718]: I0123 17:53:28.875963 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:53:28 crc kubenswrapper[4718]: I0123 17:53:28.876010 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:53:28 crc 
kubenswrapper[4718]: I0123 17:53:28.876677 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:53:28 crc kubenswrapper[4718]: I0123 17:53:28.876731 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647" gracePeriod=600 Jan 23 17:53:29 crc kubenswrapper[4718]: I0123 17:53:29.947141 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647" exitCode=0 Jan 23 17:53:29 crc kubenswrapper[4718]: I0123 17:53:29.947217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647"} Jan 23 17:53:29 crc kubenswrapper[4718]: I0123 17:53:29.947687 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"} Jan 23 17:53:29 crc kubenswrapper[4718]: I0123 17:53:29.947709 4718 scope.go:117] "RemoveContainer" containerID="2eeb7c3d243f524f11835057afdda4966d05ce7968d02693ddb3069b728019df" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.243793 4718 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:19 crc kubenswrapper[4718]: E0123 17:54:19.244909 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="extract-content" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.244926 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="extract-content" Jan 23 17:54:19 crc kubenswrapper[4718]: E0123 17:54:19.244953 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="registry-server" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.244959 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="registry-server" Jan 23 17:54:19 crc kubenswrapper[4718]: E0123 17:54:19.244971 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="extract-utilities" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.244978 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="extract-utilities" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.245239 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9e7735-6d19-46f6-b387-147874311d92" containerName="registry-server" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.246963 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.257882 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.377542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.377952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.378042 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvsm6\" (UniqueName: \"kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.480706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.480798 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.480834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvsm6\" (UniqueName: \"kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.481340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.481384 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.502301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvsm6\" (UniqueName: \"kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6\") pod \"community-operators-jxsht\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:19 crc kubenswrapper[4718]: I0123 17:54:19.570821 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:20 crc kubenswrapper[4718]: I0123 17:54:20.106728 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:20 crc kubenswrapper[4718]: I0123 17:54:20.553865 4718 generic.go:334] "Generic (PLEG): container finished" podID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerID="1db2dfee6ac8e7d2d3092674f1f812717a1aac11098d2fa1b0db9b3a2f513a5e" exitCode=0 Jan 23 17:54:20 crc kubenswrapper[4718]: I0123 17:54:20.553933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerDied","Data":"1db2dfee6ac8e7d2d3092674f1f812717a1aac11098d2fa1b0db9b3a2f513a5e"} Jan 23 17:54:20 crc kubenswrapper[4718]: I0123 17:54:20.554396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerStarted","Data":"8a7685892b280a2b8eb331cfbcfff17dbb26c1ca25813fa57b5ecddb09817e06"} Jan 23 17:54:20 crc kubenswrapper[4718]: I0123 17:54:20.557317 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:54:22 crc kubenswrapper[4718]: I0123 17:54:22.584387 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerStarted","Data":"915aec672a03ed3a5d8cc09151c4cf02330f0041209c06d41788c7db60af9022"} Jan 23 17:54:23 crc kubenswrapper[4718]: I0123 17:54:23.603338 4718 generic.go:334] "Generic (PLEG): container finished" podID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerID="915aec672a03ed3a5d8cc09151c4cf02330f0041209c06d41788c7db60af9022" exitCode=0 Jan 23 17:54:23 crc kubenswrapper[4718]: I0123 17:54:23.603492 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerDied","Data":"915aec672a03ed3a5d8cc09151c4cf02330f0041209c06d41788c7db60af9022"} Jan 23 17:54:25 crc kubenswrapper[4718]: I0123 17:54:25.634356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerStarted","Data":"2a595dad4f2f3c79b268951221f6cd932f32a1395de37df5f1870fb5ab2e8198"} Jan 23 17:54:25 crc kubenswrapper[4718]: I0123 17:54:25.665430 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxsht" podStartSLOduration=2.874150635 podStartE2EDuration="6.665398296s" podCreationTimestamp="2026-01-23 17:54:19 +0000 UTC" firstStartedPulling="2026-01-23 17:54:20.55696772 +0000 UTC m=+5861.704209711" lastFinishedPulling="2026-01-23 17:54:24.348215381 +0000 UTC m=+5865.495457372" observedRunningTime="2026-01-23 17:54:25.655393144 +0000 UTC m=+5866.802635135" watchObservedRunningTime="2026-01-23 17:54:25.665398296 +0000 UTC m=+5866.812640287" Jan 23 17:54:29 crc kubenswrapper[4718]: I0123 17:54:29.571427 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:29 crc kubenswrapper[4718]: I0123 17:54:29.573178 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:29 crc kubenswrapper[4718]: I0123 17:54:29.621054 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:40 crc kubenswrapper[4718]: I0123 17:54:40.185165 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:43 crc kubenswrapper[4718]: I0123 17:54:43.636114 
4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:43 crc kubenswrapper[4718]: I0123 17:54:43.636683 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxsht" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="registry-server" containerID="cri-o://2a595dad4f2f3c79b268951221f6cd932f32a1395de37df5f1870fb5ab2e8198" gracePeriod=2 Jan 23 17:54:43 crc kubenswrapper[4718]: I0123 17:54:43.847117 4718 generic.go:334] "Generic (PLEG): container finished" podID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerID="2a595dad4f2f3c79b268951221f6cd932f32a1395de37df5f1870fb5ab2e8198" exitCode=0 Jan 23 17:54:43 crc kubenswrapper[4718]: I0123 17:54:43.847167 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerDied","Data":"2a595dad4f2f3c79b268951221f6cd932f32a1395de37df5f1870fb5ab2e8198"} Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.185733 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.250565 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities\") pod \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.250856 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content\") pod \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.250917 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvsm6\" (UniqueName: \"kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6\") pod \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\" (UID: \"0058f1d4-3fdf-4686-8307-b92b9a6bbd93\") " Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.252384 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities" (OuterVolumeSpecName: "utilities") pod "0058f1d4-3fdf-4686-8307-b92b9a6bbd93" (UID: "0058f1d4-3fdf-4686-8307-b92b9a6bbd93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.258904 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6" (OuterVolumeSpecName: "kube-api-access-mvsm6") pod "0058f1d4-3fdf-4686-8307-b92b9a6bbd93" (UID: "0058f1d4-3fdf-4686-8307-b92b9a6bbd93"). InnerVolumeSpecName "kube-api-access-mvsm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.320702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0058f1d4-3fdf-4686-8307-b92b9a6bbd93" (UID: "0058f1d4-3fdf-4686-8307-b92b9a6bbd93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.354253 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.354300 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvsm6\" (UniqueName: \"kubernetes.io/projected/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-kube-api-access-mvsm6\") on node \"crc\" DevicePath \"\"" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.354313 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0058f1d4-3fdf-4686-8307-b92b9a6bbd93-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.862840 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxsht" event={"ID":"0058f1d4-3fdf-4686-8307-b92b9a6bbd93","Type":"ContainerDied","Data":"8a7685892b280a2b8eb331cfbcfff17dbb26c1ca25813fa57b5ecddb09817e06"} Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.863412 4718 scope.go:117] "RemoveContainer" containerID="2a595dad4f2f3c79b268951221f6cd932f32a1395de37df5f1870fb5ab2e8198" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.863735 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxsht" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.905099 4718 scope.go:117] "RemoveContainer" containerID="915aec672a03ed3a5d8cc09151c4cf02330f0041209c06d41788c7db60af9022" Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.917736 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.931064 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxsht"] Jan 23 17:54:44 crc kubenswrapper[4718]: I0123 17:54:44.943463 4718 scope.go:117] "RemoveContainer" containerID="1db2dfee6ac8e7d2d3092674f1f812717a1aac11098d2fa1b0db9b3a2f513a5e" Jan 23 17:54:45 crc kubenswrapper[4718]: I0123 17:54:45.156186 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" path="/var/lib/kubelet/pods/0058f1d4-3fdf-4686-8307-b92b9a6bbd93/volumes" Jan 23 17:55:02 crc kubenswrapper[4718]: I0123 17:55:02.546918 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 17:55:02 crc kubenswrapper[4718]: I0123 17:55:02.547764 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 17:55:59 crc kubenswrapper[4718]: I0123 17:55:59.209168 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 17:55:59 crc kubenswrapper[4718]: I0123 17:55:59.299572 4718 patch_prober.go:28] interesting 
pod/logging-loki-gateway-5f48ff8847-th7vf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:55:59 crc kubenswrapper[4718]: I0123 17:55:59.299919 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:55:59 crc kubenswrapper[4718]: I0123 17:55:59.423534 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="2ec50566-57bf-4ddf-aa36-4dfe1fa36d07" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.1.7:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:55:59 crc kubenswrapper[4718]: I0123 17:55:59.436356 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="8ee3d2a0-3f10-40d9-980c-deb1bc35b613" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.19:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.449862 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.449919 4718 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.450419 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.450438 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.477966 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b4d87654d-p9q2p" podUID="d3d50a24-2b4e-43eb-ac1a-2807554f0989" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.210:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.481891 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.221:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.484241 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-br6fl" podUID="f99e5457-16fb-453f-909c-a8364ffc0372" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.489045 4718 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-g6hrd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.489446 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g6hrd" podUID="e179a011-4637-42d5-9679-e910440d25ac" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.490588 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.221:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.491516 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.221:8080/healthcheck\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535169 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535212 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535475 4718 patch_prober.go:28] interesting pod/thanos-querier-f7bdc8bf4-sj8vz container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535493 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" podUID="b4fd2138-8b95-4e9b-992a-d368d2f2ea94" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535666 4718 patch_prober.go:28] interesting pod/thanos-querier-f7bdc8bf4-sj8vz container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe 
status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.535684 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" podUID="b4fd2138-8b95-4e9b-992a-d368d2f2ea94" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.561333 4718 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qwldq container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:55:59.561410 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-qwldq" podUID="a29dbc90-997d-4f83-8151-1cfcca661070" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:00.532772 4718 patch_prober.go:28] interesting pod/metrics-server-67b49ccc4f-bxbv9 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:00.533107 4718 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" podUID="ba11b9fa-937a-42f1-9559-79397077a342" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:00.533157 4718 patch_prober.go:28] interesting pod/metrics-server-67b49ccc4f-bxbv9 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:00.533172 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" podUID="ba11b9fa-937a-42f1-9559-79397077a342" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:07.061752 4718 patch_prober.go:28] interesting pod/metrics-server-67b49ccc4f-bxbv9 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:07.062040 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" podUID="ba11b9fa-937a-42f1-9559-79397077a342" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:07.062082 4718 patch_prober.go:28] interesting pod/metrics-server-67b49ccc4f-bxbv9 container/metrics-server 
namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:07.062098 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-67b49ccc4f-bxbv9" podUID="ba11b9fa-937a-42f1-9559-79397077a342" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.439553 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" podUID="18395392-bb8d-49be-9b49-950d6f32b9f6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": dial tcp 10.217.0.116:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.441474 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.441524 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.441863 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 
container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.441880 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.451458 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="8ee3d2a0-3f10-40d9-980c-deb1bc35b613" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.19:8080/livez\": context deadline exceeded" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.439544 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-pd8td" podUID="9e7b3c6e-a339-4412-aecf-1091bfc315a5" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.490650 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b4d87654d-p9q2p" podUID="d3d50a24-2b4e-43eb-ac1a-2807554f0989" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.210:9311/healthcheck\": dial tcp 10.217.0.210:9311: i/o timeout" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.490965 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" podUID="addb55c8-8565-42c2-84d2-7ee7e8693a3a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.491012 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b4d87654d-p9q2p" podUID="d3d50a24-2b4e-43eb-ac1a-2807554f0989" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.210:9311/healthcheck\": dial tcp 10.217.0.210:9311: i/o timeout" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.491084 4718 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-hqbgx container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/readyz\": dial tcp 10.217.0.87:9443: i/o timeout" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.491132 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hqbgx" podUID="9d41c1ee-b304-42c0-a2e7-2fe83315a430" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.87:9443/readyz\": dial tcp 10.217.0.87:9443: i/o timeout" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.496242 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.125219582s: [/var/lib/containers/storage/overlay/0476f9c4a59339bc648eed9f495f7e5a101f5824cbc93891af21b4467c1e0223/diff /var/log/pods/openstack_barbican-api-5b4d87654d-p9q2p_d3d50a24-2b4e-43eb-ac1a-2807554f0989/barbican-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.496676 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.083888279s: [/var/lib/containers/storage/overlay/1c8fd9a0664cd0ea66ba03423115280dfffe59eab9f5321c92d660c5f88895fb/diff /var/log/pods/openshift-monitoring_prometheus-k8s-0_5a8875e9-a37e-4b44-a63d-88cbbd2aaefa/prometheus/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 
17:56:08.499869 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.12328582s: [/var/lib/containers/storage/overlay/917ff4b2900e1bfde4eb9e5860dad508629c0475ea790eef703ebf6508bd93d1/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.508047 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.140409165s: [/var/lib/containers/storage/overlay/cf85095d53c02273f7624f34dee2876eb26c87c27a958a1f196234937cf2bdf8/diff /var/log/pods/openstack_heat-engine-6b6465d99d-xv658_6bb4cb4d-9614-4570-a061-73f87bc9a159/heat-engine/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.548573 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-pd8td" podUID="9e7b3c6e-a339-4412-aecf-1091bfc315a5" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.548583 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-rfc5b" podUID="b47f2ba5-694f-4929-9932-a844b35ba149" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.550310 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" podUID="858bcd70-b537-4da9-8ca9-27c1724ece99" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.550625 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.550684 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.554171 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.554232 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.555779 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.555827 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.555884 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8081/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.555900 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.555934 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9fdab71e-08c8-4269-a9dd-69b152751e4d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.233:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.556173 4718 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-ss8qc container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.556202 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ss8qc" podUID="5ededf03-fe00-4583-b2ee-ef2a3f301f79" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.556390 4718 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-5cw9h container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.556417 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-5cw9h" podUID="96cdf9bc-4893-4918-94e9-a23212e8ec5c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.560751 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2kdmh" podUID="0a04951a-b116-4c6f-ad48-4742051ef181" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.561024 4718 patch_prober.go:28] interesting pod/thanos-querier-f7bdc8bf4-sj8vz container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.561125 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-f7bdc8bf4-sj8vz" podUID="b4fd2138-8b95-4e9b-992a-d368d2f2ea94" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.562156 4718 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-vkwhk container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.69:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.570463 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" podUID="28e242fd-d251-4f42-8db6-44948d62ad87" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.69:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.563200 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" podUID="ae7c1f40-90dd-441b-9dc5-608e1a503f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.563162 4718 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-vkwhk container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.69:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 
crc kubenswrapper[4718]: I0123 17:56:08.570978 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-vkwhk" podUID="28e242fd-d251-4f42-8db6-44948d62ad87" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.69:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.572183 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" podUID="d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.573058 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.573097 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.604810 4718 patch_prober.go:28] interesting pod/controller-manager-dcd9cb8d6-gtl5p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.604863 
4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-dcd9cb8d6-gtl5p" podUID="0aefccb3-bec0-40ca-bd49-5d18b5df30fa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.605431 4718 patch_prober.go:28] interesting pod/router-default-5444994796-hzrpf container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.605481 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hzrpf" podUID="6e51ecf0-a72a-461c-a669-8bce49b39003" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.607010 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" podUID="18395392-bb8d-49be-9b49-950d6f32b9f6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.609378 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" podUID="d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc 
kubenswrapper[4718]: I0123 17:56:08.611982 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" podUID="2062a379-6201-4835-8974-24befcfbf8e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.612416 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" podUID="8e29e3d6-21d7-4a1a-832e-f831d884fd00" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.618978 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-rfc5b" podUID="b47f2ba5-694f-4929-9932-a844b35ba149" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.684370 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.689619 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="8ee3d2a0-3f10-40d9-980c-deb1bc35b613" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.19:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 
crc kubenswrapper[4718]: I0123 17:56:08.689762 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.694220 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.691860 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.700720 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9a386a29-4af1-4f01-ac73-771210f5a97f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.700762 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="9a386a29-4af1-4f01-ac73-771210f5a97f" containerName="prometheus" probeResult="failure" output="Get 
\"https://10.217.0.176:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.700779 4718 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.704107 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="cdb096c7-cef2-48a8-9f83-4752311a02be" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701914 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" podUID="a2118990-95a4-4a61-8c6a-3a72bdea8642" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.40:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701880 4718 patch_prober.go:28] interesting pod/oauth-openshift-7b964c775c-ct48x container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.704517 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701964 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82c5d1a7-2493-4399-9a20-247f71a1c754" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.15:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701896 4718 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.63:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.705924 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="014d7cb2-435f-4a6f-85af-6bc6553d6704" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.63:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701929 4718 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.706020 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="c3a642bb-f3f3-4e14-9442-0aa47e1b7b43" 
containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702056 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-f6b6f5fd7-qpbqz" podUID="e836bdf5-8379-4f60-8dbe-7be5381ed922" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.206:5000/v3\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702043 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/keystone-f6b6f5fd7-qpbqz" podUID="e836bdf5-8379-4f60-8dbe-7be5381ed922" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.206:5000/v3\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701940 4718 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-f6czp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.706289 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" podUID="0c5d01e6-e826-4f29-9160-ab28e19020b9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702005 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/cinder-scheduler-0" podUID="f1c0f246-5016-4f2f-94a8-5805981faffc" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.220:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702190 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b4d87654d-p9q2p" podUID="d3d50a24-2b4e-43eb-ac1a-2807554f0989" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.210:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701978 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82c5d1a7-2493-4399-9a20-247f71a1c754" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.15:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702210 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.221:8080/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701992 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="9fdab71e-08c8-4269-a9dd-69b152751e4d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.233:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702030 4718 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" podUID="369053b2-11b0-4e19-a77d-3ea9cf595039" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.701952 4718 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-f6czp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.706886 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f6czp" podUID="0c5d01e6-e826-4f29-9160-ab28e19020b9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702257 4718 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.706911 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc 
kubenswrapper[4718]: I0123 17:56:08.702017 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" podUID="369053b2-11b0-4e19-a77d-3ea9cf595039" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702268 4718 patch_prober.go:28] interesting pod/dns-default-n58ck container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.39:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.707056 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-n58ck" podUID="bbd71c65-e596-40fe-8f5a-86d849c44b24" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.39:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702278 4718 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.707281 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702290 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.710831 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.371230772s: [/var/lib/containers/storage/overlay/ffb8db557571171e77fea6189815276b069f23ffbb35ced3d73fe8daba4e1894/diff /var/log/pods/openshift-nmstate_nmstate-handler-hbmxh_c66f413f-8a00-4526-b93f-4d739aec140c/nmstate-handler/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.710876 4718 patch_prober.go:28] interesting pod/console-operator-58897d9998-5p7bb container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.710902 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-5p7bb" podUID="045e9b38-2543-4b98-919f-55227f2094a9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702299 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.719440 4718 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702309 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.719766 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.702319 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.719913 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.735903 4718 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.396190331s: [/var/lib/containers/storage/overlay/3f1780e4d31a2f4c142f516368e95fbbee62c050bb9e1d6b8408baa9ab247115/diff /var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4r42t_c9ada4d9-34eb-43fb-a0ba-09b879eab797/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.783731 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.835414 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.490884706s: [/var/lib/containers/storage/overlay/30e9da488086a21e69ecc898d566137e7e406d822d4c5eeea125495c6b9f6475/diff /var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-kfsc2_c015fcec-cc64-4d3c-bddd-df7d887d0ea3/248943ffaaf891dcd1625cc95618042963001633bc66baf8e5e55890df593e31.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.847733 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.503538011s: [/var/lib/containers/storage/overlay/4b158f4e5578eb2df248ed81c3f08b0f0590e3c84a145d91cecf17b9f5a63ab0/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.847799 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.503275334s: [/var/lib/containers/storage/overlay/932b31294f86cfe842b1d14d6d22b87280ba5a4c85d05e4bff80e9a44a89ebf6/diff 
/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64ktf_235aadec-9416-469c-8455-64dd1bc82a08/operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.848148 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.508443713s: [/var/lib/containers/storage/overlay/5707a10856add9758c9bfdfc4ae4c41b32f6698a488f98f524bbb51fe622bb04/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.857016 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.512866674s: [/var/lib/containers/storage/overlay/7e19c175a06c3629f0c65990055c1140c77cd0128caa35329833723e3de807b6/diff /var/log/pods/openstack_ovsdbserver-nb-0_cf53eabe-609c-471c-ae7e-ca9fb950f86e/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.880618 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.880845 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.889617 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": write tcp 
192.168.126.11:41162->192.168.126.11:10257: write: connection reset by peer" start-of-body= Jan 23 17:56:08 crc kubenswrapper[4718]: I0123 17:56:08.889769 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": write tcp 192.168.126.11:41162->192.168.126.11:10257: write: connection reset by peer" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.064460 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.716041639s: [/var/lib/containers/storage/overlay/c18cc283833dbeb0599ef1a0b5d6706eba2d7ffb75a765520abd1a92eb54b4c4/diff /var/log/pods/openstack_neutron-8567b78dd5-chd6w_5c367121-318c-413c-96e5-f53a105d91d3/neutron-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.089403 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.740857593s: [/var/lib/containers/storage/overlay/0ba16bb013ab3e88311f501ceea95671a345b5712cd71da511c7a43c1d10b13b/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.089827 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.741286655s: [/var/lib/containers/storage/overlay/b31066313e822bfae3cdb49056b6249982aa1a5b9e2a0c1e596ca64885330ecd/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.204227 4718 trace.go:236] Trace[531852652]: "iptables ChainExists" (23-Jan-2026 17:55:59.348) (total time: 9761ms): Jan 23 17:56:09 crc kubenswrapper[4718]: Trace[531852652]: [9.7613094s] [9.7613094s] END Jan 23 17:56:09 crc kubenswrapper[4718]: E0123 17:56:09.222203 4718 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.311753 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/dnsmasq-dns-5d75f767dc-4pkvj" podUID="69dc82c8-1e85-459e-9580-cbc33c567be5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.29:5353: i/o timeout" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.336895 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 9.988349363s: [/var/lib/containers/storage/overlay/604757f041ee21cd001fd9944856b6b65cbb399ea8ce489125257285415e9bb5/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.384457 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 10.035932788s: [/var/lib/containers/storage/overlay/e60709208f906508ac59af895a5fc7ea0105f6ef323a42198e197b366d3c906b/diff /var/log/pods/openstack_cinder-scheduler-0_f1c0f246-5016-4f2f-94a8-5805981faffc/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.462292 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 10.113790334s: [/var/lib/containers/storage/overlay/aea7c6ad407ea1138437d6d591379af08ec968d5279113edc4b17197a3a711e7/diff /var/log/pods/openstack_mysqld-exporter-0_fa116646-6ee2-42f2-8a0f-56459516d495/mysqld-exporter/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.465606 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.473784 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.477236 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.477344 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.501406 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 10.152916939s: [/var/lib/containers/storage/overlay/ba5ea5e56fae2e8c79e1b4d96aefbcbcce30ea55da6f3ec7e5e646dc41d1cc9c/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cs66r_6d67582c-e13c-49b0-ba38-842183da7019/ovnkube-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.504742 4718 trace.go:236] Trace[1856057995]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (23-Jan-2026 17:56:08.455) (total time: 1049ms): Jan 23 17:56:09 crc kubenswrapper[4718]: Trace[1856057995]: [1.049132929s] [1.049132929s] END 
Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.501851 4718 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 10.153379631s: [/var/lib/containers/storage/overlay/dfd233d33225cc0fdc4b06b863f58f41c8529e731911f408307d8d9f14ec36ba/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cs66r_6d67582c-e13c-49b0-ba38-842183da7019/sbdb/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.660847 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.660921 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:09 crc kubenswrapper[4718]: I0123 17:56:09.661830 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="91543550-f764-468a-a1e1-980e3d08aa41" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:09 crc kubenswrapper[4718]: E0123 17:56:09.670600 4718 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.719036 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.719087 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.748123 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" podUID="7eb6e283-9137-4b68-88b1-9a9dccb9fcd5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": dial tcp 10.217.0.94:8080: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.752839 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="91543550-f764-468a-a1e1-980e3d08aa41" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.753298 4718 trace.go:236] Trace[2014621988]: "Calculate volume metrics of storage for pod minio-dev/minio" (23-Jan-2026 17:56:08.548) (total time: 1204ms): Jan 23 17:56:10 crc kubenswrapper[4718]: Trace[2014621988]: [1.204937836s] [1.204937836s] END Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.767518 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" podUID="8d9099e2-7f4f-42d8-8e76-d2d8347a1514" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.769272 4718 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" podUID="858bcd70-b537-4da9-8ca9-27c1724ece99" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": dial tcp 10.217.0.102:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.769749 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" podUID="32d58a3a-df31-492e-a2c2-2f5ca31c5f90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": dial tcp 10.217.0.111:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.770541 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" podUID="7eb6e283-9137-4b68-88b1-9a9dccb9fcd5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": dial tcp 10.217.0.94:8080: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.788978 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" podUID="858bcd70-b537-4da9-8ca9-27c1724ece99" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": dial tcp 10.217.0.102:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.789184 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" podUID="8d9099e2-7f4f-42d8-8e76-d2d8347a1514" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.789679 4718 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" podUID="06df7a47-9233-4957-936e-27f58aeb0000" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": dial tcp 10.217.0.109:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.795297 4718 trace.go:236] Trace[1229909528]: "iptables ChainExists" (23-Jan-2026 17:55:59.371) (total time: 10423ms): Jan 23 17:56:10 crc kubenswrapper[4718]: Trace[1229909528]: [10.423713993s] [10.423713993s] END Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.816793 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" podUID="32d58a3a-df31-492e-a2c2-2f5ca31c5f90" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": dial tcp 10.217.0.111:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.828416 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" podUID="8d9099e2-7f4f-42d8-8e76-d2d8347a1514" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.849294 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" podUID="8d9099e2-7f4f-42d8-8e76-d2d8347a1514" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:09.936395 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:10.130898 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 17:56:10 crc kubenswrapper[4718]: I0123 17:56:10.130962 4718 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc" exitCode=1 Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.449755 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" podUID="369053b2-11b0-4e19-a77d-3ea9cf595039" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": dial tcp 10.217.0.123:8081: connect: connection refused" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.737758 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-c72c6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.737828 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-c72c6" podUID="46bec7ac-b95d-425d-ab7a-4a669278b158" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.757527 4718 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.757706 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.814610 4718 patch_prober.go:28] interesting pod/logging-loki-gateway-5f48ff8847-th7vf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:10.814676 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f48ff8847-th7vf" podUID="5aca942e-fa67-4679-a257-6db5cf93a95a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:11.145594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc"} Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:11.183082 4718 scope.go:117] "RemoveContainer" containerID="08f233d3095860213d06d53307d82f0aa78d43b0237ef6fe129ddc3c8521a329" Jan 23 17:56:11 crc kubenswrapper[4718]: E0123 17:56:11.316163 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49cc2143_a384_436e_8eef_4d7474918177.slice/crio-b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e29e3d6_21d7_4a1a_832e_f831d884fd00.slice/crio-537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfce3f5_1f59_43ae_aa99_2483cfb33806.slice/crio-conmon-311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9099e2_7f4f_42d8_8e76_d2d8347a1514.slice/crio-19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddb55c8_8565_42c2_84d2_7ee7e8693a3a.slice/crio-a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a95eff5_116c_4141_bee6_5bda12f21e11.slice/crio-09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8950bc_8213_40eb_9bb7_2e1a8c66b57b.slice/crio-conmon-4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2062a379_6201_4835_8974_24befcfbf8e0.slice/crio-9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50178034_67cf_4f8d_89bb_788c8a73a72a.slice/crio-9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2fe0ff3_bfa2_4cc4_b85c_8bc89ca73078.slice/crio-7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8950bc_8213_40eb_9bb7_2e1a8c66b57b.slice/crio-4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d58a3a_df31_492e_a2c2_2f5ca31c5f90.slice/crio-6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235aadec_9416_469c_8455_64dd1bc82a08.slice/crio-d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-conmon-92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:11 crc kubenswrapper[4718]: E0123 17:56:11.319852 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8950bc_8213_40eb_9bb7_2e1a8c66b57b.slice/crio-conmon-4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9099e2_7f4f_42d8_8e76_d2d8347a1514.slice/crio-19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50178034_67cf_4f8d_89bb_788c8a73a72a.slice/crio-9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-conmon-92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91ca0c9_05fb_4e8b_9581_f2c3d025c0e2.slice/crio-conmon-9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a95eff5_116c_4141_bee6_5bda12f21e11.slice/crio-09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddb55c8_8565_42c2_84d2_7ee7e8693a3a.slice/crio-a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d58a3a_df31_492e_a2c2_2f5ca31c5f90.slice/crio-6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2062a379_6201_4835_8974_24befcfbf8e0.slice/crio-9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e29e3d6_21d7_4a1a_832e_f831d884fd00.slice/crio-537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfce3f5_1f59_43ae_aa99_2483cfb33806.slice/crio-conmon-311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb6e283_9137_4b68_88b1_9a9dccb9fcd5.slice/crio-933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235aadec_9416_469c_8455_64dd1bc82a08.slice/crio-d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2fe0ff3_bfa2_4cc4_b85c_8bc89ca73078.slice/crio-7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:11 crc kubenswrapper[4718]: E0123 17:56:11.333149 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91ca0c9_05fb_4e8b_9581_f2c3d025c0e2.slice/crio-conmon-9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfce3f5_1f59_43ae_aa99_2483cfb33806.slice/crio-conmon-311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:11 crc kubenswrapper[4718]: E0123 17:56:11.337711 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a95eff5_116c_4141_bee6_5bda12f21e11.slice/crio-09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91ca0c9_05fb_4e8b_9581_f2c3d025c0e2.slice/crio-conmon-9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8950bc_8213_40eb_9bb7_2e1a8c66b57b.slice/crio-4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d58a3a_df31_492e_a2c2_2f5ca31c5f90.slice/crio-6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50178034_67cf_4f8d_89bb_788c8a73a72a.slice/crio-9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb6e283_9137_4b68_88b1_9a9dccb9fcd5.slice/crio-933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-conmon-92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235aadec_9416_469c_8455_64dd1bc82a08.slice/crio-d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2fe0ff3_bfa2_4cc4_b85c_8bc89ca73078.slice/crio-7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49cc2143_a384_436e_8eef_4d7474918177.slice/crio-b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddb55c8_8565_42c2_84d2_7ee7e8693a3a.slice/crio-a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2062a379_6201_4835_8974_24befcfbf8e0.slice/crio-9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8950bc_8213_40eb_9bb7_2e1a8c66b57b.slice/crio-conmon-4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:11.390514 4718 generic.go:334] "Generic (PLEG): container finished" podID="18395392-bb8d-49be-9b49-950d6f32b9f6" containerID="a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9" exitCode=1 Jan 23 17:56:11 crc kubenswrapper[4718]: I0123 17:56:11.390576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" event={"ID":"18395392-bb8d-49be-9b49-950d6f32b9f6","Type":"ContainerDied","Data":"a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.466873 4718 scope.go:117] "RemoveContainer" containerID="a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.473991 4718 generic.go:334] "Generic (PLEG): container finished" podID="addb55c8-8565-42c2-84d2-7ee7e8693a3a" containerID="a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.474087 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" event={"ID":"addb55c8-8565-42c2-84d2-7ee7e8693a3a","Type":"ContainerDied","Data":"a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.504999 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="16e17ade-97be-48d4-83d4-7ac385174edd" containerID="ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.505118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" event={"ID":"16e17ade-97be-48d4-83d4-7ac385174edd","Type":"ContainerDied","Data":"ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.517718 4718 generic.go:334] "Generic (PLEG): container finished" podID="7eb6e283-9137-4b68-88b1-9a9dccb9fcd5" containerID="933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.517782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" event={"ID":"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5","Type":"ContainerDied","Data":"933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.566155 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="873a6275-b8f2-4554-9c4d-f44a6629111d" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.566516 4718 generic.go:334] "Generic (PLEG): container finished" podID="0fe9ca7e-5763-4cba-afc1-94065f21f33e" containerID="fc19cc4ef38e7d6e4a162714e7865fb4fd71a442219981f84780252246ae5c59" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.566563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" event={"ID":"0fe9ca7e-5763-4cba-afc1-94065f21f33e","Type":"ContainerDied","Data":"fc19cc4ef38e7d6e4a162714e7865fb4fd71a442219981f84780252246ae5c59"} Jan 23 17:56:13 crc kubenswrapper[4718]: 
I0123 17:56:11.583454 4718 scope.go:117] "RemoveContainer" containerID="933e811a6a4ca3cc9e0fb12d35f4203ca18ba4e935df79f1eeee3451b7dfd55d" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.604596 4718 scope.go:117] "RemoveContainer" containerID="a2a67946d259ad0eb4d1623bd377351a6d24bc09af85dfb9bf1d9e9a08ceafcf" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.613543 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.629342 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.629418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.679193 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078" containerID="7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.679491 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" event={"ID":"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078","Type":"ContainerDied","Data":"7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.711843 4718 scope.go:117] "RemoveContainer" containerID="ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.742508 4718 
generic.go:334] "Generic (PLEG): container finished" podID="2062a379-6201-4835-8974-24befcfbf8e0" containerID="9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.742579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" event={"ID":"2062a379-6201-4835-8974-24befcfbf8e0","Type":"ContainerDied","Data":"9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.795484 4718 generic.go:334] "Generic (PLEG): container finished" podID="858bcd70-b537-4da9-8ca9-27c1724ece99" containerID="b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.795574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" event={"ID":"858bcd70-b537-4da9-8ca9-27c1724ece99","Type":"ContainerDied","Data":"b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.798444 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 23 17:56:13 crc kubenswrapper[4718]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 23 17:56:13 crc kubenswrapper[4718]: > Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.798610 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.838031 4718 generic.go:334] "Generic (PLEG): container finished" podID="235aadec-9416-469c-8455-64dd1bc82a08" containerID="d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff" exitCode=1 Jan 23 17:56:13 crc 
kubenswrapper[4718]: I0123 17:56:11.838094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" event={"ID":"235aadec-9416-469c-8455-64dd1bc82a08","Type":"ContainerDied","Data":"d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.849961 4718 scope.go:117] "RemoveContainer" containerID="fc19cc4ef38e7d6e4a162714e7865fb4fd71a442219981f84780252246ae5c59" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.873586 4718 generic.go:334] "Generic (PLEG): container finished" podID="d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2" containerID="9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.878801 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" event={"ID":"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2","Type":"ContainerDied","Data":"9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.883595 4718 scope.go:117] "RemoveContainer" containerID="6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.883897 4718 scope.go:117] "RemoveContainer" containerID="7b9e89600cb9690538cf5b0ab4101d4d17d5d6e808963977d62639be9b3db8db" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.946002 4718 generic.go:334] "Generic (PLEG): container finished" podID="9a95eff5-116c-4141-bee6-5bda12f21e11" containerID="09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.946071 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" 
event={"ID":"9a95eff5-116c-4141-bee6-5bda12f21e11","Type":"ContainerDied","Data":"09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.946858 4718 trace.go:236] Trace[1191260009]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (23-Jan-2026 17:56:08.589) (total time: 3357ms): Jan 23 17:56:13 crc kubenswrapper[4718]: Trace[1191260009]: [3.357275425s] [3.357275425s] END Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.955497 4718 trace.go:236] Trace[161933055]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (23-Jan-2026 17:56:08.719) (total time: 3236ms): Jan 23 17:56:13 crc kubenswrapper[4718]: Trace[161933055]: [3.236176081s] [3.236176081s] END Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:11.999847 4718 scope.go:117] "RemoveContainer" containerID="9947294c74a66e9711cc2fefff6cd84b01e8f038de836477523ee076630beb0e" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.000934 4718 generic.go:334] "Generic (PLEG): container finished" podID="06df7a47-9233-4957-936e-27f58aeb0000" containerID="727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.000999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" event={"ID":"06df7a47-9233-4957-936e-27f58aeb0000","Type":"ContainerDied","Data":"727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.036540 4718 scope.go:117] "RemoveContainer" containerID="d1ac29f31878ae79ebec31ec143c9d5c92dc31d4c2e0ea17addcaa62966fe5ff" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.036707 4718 generic.go:334] "Generic (PLEG): container finished" podID="d869ec7c-ddd9-4e17-9154-a793539a2a00" 
containerID="8d638ecf3d8d146cee0b9fbeb7f3465d0679134799cbd20dcaaab9f90e6fc8ea" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.036826 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" event={"ID":"d869ec7c-ddd9-4e17-9154-a793539a2a00","Type":"ContainerDied","Data":"8d638ecf3d8d146cee0b9fbeb7f3465d0679134799cbd20dcaaab9f90e6fc8ea"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.067355 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e29e3d6-21d7-4a1a-832e-f831d884fd00" containerID="537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.067478 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" event={"ID":"8e29e3d6-21d7-4a1a-832e-f831d884fd00","Type":"ContainerDied","Data":"537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.092506 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" podUID="a2118990-95a4-4a61-8c6a-3a72bdea8642" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.104470 4718 trace.go:236] Trace[1405855699]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (23-Jan-2026 17:56:09.623) (total time: 2481ms): Jan 23 17:56:13 crc kubenswrapper[4718]: Trace[1405855699]: [2.481332116s] [2.481332116s] END Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.171418 4718 scope.go:117] "RemoveContainer" containerID="b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.193718 4718 scope.go:117] "RemoveContainer" 
containerID="9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.205264 4718 scope.go:117] "RemoveContainer" containerID="09f3c77a6842a9fecbcf17ca6fc40bbd6c667f091d8b041a12cdba71442c5434" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.205614 4718 scope.go:117] "RemoveContainer" containerID="537d4660efcb30fa9d53ae34e801a4e9fdccb49606e222e8c5b93e2a8bf0f474" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.217992 4718 generic.go:334] "Generic (PLEG): container finished" podID="32d58a3a-df31-492e-a2c2-2f5ca31c5f90" containerID="6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.218112 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" event={"ID":"32d58a3a-df31-492e-a2c2-2f5ca31c5f90","Type":"ContainerDied","Data":"6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.243278 4718 trace.go:236] Trace[801718015]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (23-Jan-2026 17:56:09.080) (total time: 3162ms): Jan 23 17:56:13 crc kubenswrapper[4718]: Trace[801718015]: [3.162446817s] [3.162446817s] END Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.247483 4718 scope.go:117] "RemoveContainer" containerID="727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.247889 4718 scope.go:117] "RemoveContainer" containerID="8d638ecf3d8d146cee0b9fbeb7f3465d0679134799cbd20dcaaab9f90e6fc8ea" Jan 23 17:56:13 crc kubenswrapper[4718]: E0123 17:56:12.267719 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfce3f5_1f59_43ae_aa99_2483cfb33806.slice/crio-conmon-311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-conmon-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91ca0c9_05fb_4e8b_9581_f2c3d025c0e2.slice/crio-conmon-9f573ab20f54561646018560348f723cad8785c5f12cb55728acbb2a9199212a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.279180 4718 generic.go:334] "Generic (PLEG): container finished" podID="3cfce3f5-1f59-43ae-aa99-2483cfb33806" containerID="311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.279256 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" event={"ID":"3cfce3f5-1f59-43ae-aa99-2483cfb33806","Type":"ContainerDied","Data":"311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.341758 4718 generic.go:334] "Generic (PLEG): container finished" podID="8d9099e2-7f4f-42d8-8e76-d2d8347a1514" containerID="19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.341866 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" event={"ID":"8d9099e2-7f4f-42d8-8e76-d2d8347a1514","Type":"ContainerDied","Data":"19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.363526 4718 scope.go:117] "RemoveContainer" containerID="6e89d73e544a86660f5990f301ce265ea87879dc7169ad94c57fbf1c78cb7c94" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.382795 4718 scope.go:117] "RemoveContainer" 
containerID="311cdf16450137a92bc349afc901ebfca385b8b41d003779b30f690116ed5cf3" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.393927 4718 generic.go:334] "Generic (PLEG): container finished" podID="50178034-67cf-4f8d-89bb-788c8a73a72a" containerID="9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.394014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" event={"ID":"50178034-67cf-4f8d-89bb-788c8a73a72a","Type":"ContainerDied","Data":"9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.428015 4718 scope.go:117] "RemoveContainer" containerID="19e77281eb21db27f677af1cf4b5b4ba0ddd4a6dae9ae9a5b0d8746199450665" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.431074 4718 generic.go:334] "Generic (PLEG): container finished" podID="9e8950bc-8213-40eb-9bb7-2e1a8c66b57b" containerID="4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.431129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" event={"ID":"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b","Type":"ContainerDied","Data":"4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.473337 4718 generic.go:334] "Generic (PLEG): container finished" podID="49cc2143-a384-436e-8eef-4d7474918177" containerID="b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.473411 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" 
event={"ID":"49cc2143-a384-436e-8eef-4d7474918177","Type":"ContainerDied","Data":"b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.504378 4718 generic.go:334] "Generic (PLEG): container finished" podID="0c42f381-34a5-4913-90b0-0bbc4e0810fd" containerID="1d5187e91a37d86972e34bebc9f92b8930cf6a5b395be479156ed1cfaf32977d" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.504485 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" event={"ID":"0c42f381-34a5-4913-90b0-0bbc4e0810fd","Type":"ContainerDied","Data":"1d5187e91a37d86972e34bebc9f92b8930cf6a5b395be479156ed1cfaf32977d"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.514554 4718 scope.go:117] "RemoveContainer" containerID="1d5187e91a37d86972e34bebc9f92b8930cf6a5b395be479156ed1cfaf32977d" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.515760 4718 scope.go:117] "RemoveContainer" containerID="9c2b9bb1d06bbc3fc3a37f8227133317586436fc3ef5a17821fdb361171c39c3" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.525270 4718 scope.go:117] "RemoveContainer" containerID="4061d1849c7eff9eb545852440accf3fa7fe07d4c1864e155b32f0b4c051a4fb" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.538968 4718 generic.go:334] "Generic (PLEG): container finished" podID="7cd4d741-2a88-466f-a644-a1c6c62e521b" containerID="f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.539063 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" event={"ID":"7cd4d741-2a88-466f-a644-a1c6c62e521b","Type":"ContainerDied","Data":"f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.540520 4718 scope.go:117] "RemoveContainer" 
containerID="b77968f188bc7982aa26ee5dd5ac9b0479c1eba90dd5efc5859eddf74bb5c2bc" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.598954 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="91543550-f764-468a-a1e1-980e3d08aa41" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.634719 4718 generic.go:334] "Generic (PLEG): container finished" podID="ae7c1f40-90dd-441b-9dc5-608e1a503f4c" containerID="fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.634774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" event={"ID":"ae7c1f40-90dd-441b-9dc5-608e1a503f4c","Type":"ContainerDied","Data":"fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.650712 4718 generic.go:334] "Generic (PLEG): container finished" podID="369053b2-11b0-4e19-a77d-3ea9cf595039" containerID="743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.650764 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" event={"ID":"369053b2-11b0-4e19-a77d-3ea9cf595039","Type":"ContainerDied","Data":"743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.653476 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.684962 4718 generic.go:334] "Generic (PLEG): container finished" podID="a6012879-2e20-485d-829f-3a9ec3e5bcb1" 
containerID="786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.688443 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" event={"ID":"a6012879-2e20-485d-829f-3a9ec3e5bcb1","Type":"ContainerDied","Data":"786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.690964 4718 scope.go:117] "RemoveContainer" containerID="f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.692551 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="592a76d3-742f-47a0-9054-309fb2670fa3" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.765598 4718 scope.go:117] "RemoveContainer" containerID="fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.820407 4718 scope.go:117] "RemoveContainer" containerID="743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.863917 4718 scope.go:117] "RemoveContainer" containerID="786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.866891 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"cfa855092aaa459299db7fa1bbf5593e8a38344687f51f39b203ab4373d7878d"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.867017 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="764a9a7e-61b2-4513-8f87-fc357857c90f" containerName="ceilometer-central-agent" containerID="cri-o://cfa855092aaa459299db7fa1bbf5593e8a38344687f51f39b203ab4373d7878d" gracePeriod=30 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:12.882581 4718 scope.go:117] "RemoveContainer" containerID="92dc1daaf0c1e5c1b44393e50cca2a13f185b5ad31afcc5133c6aaf29a5223cc" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.058261 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.739246 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.739756 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.804149 4718 generic.go:334] "Generic (PLEG): container finished" podID="1e5ee60b-7363-4a74-b69d-1f4f474166e0" containerID="1eaa5b902a85325bd263b644ff73d85d74f5f4c5ee33f782984ce74ae74b2e15" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.804266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8jlw4" event={"ID":"1e5ee60b-7363-4a74-b69d-1f4f474166e0","Type":"ContainerDied","Data":"1eaa5b902a85325bd263b644ff73d85d74f5f4c5ee33f782984ce74ae74b2e15"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.805152 4718 scope.go:117] "RemoveContainer" containerID="1eaa5b902a85325bd263b644ff73d85d74f5f4c5ee33f782984ce74ae74b2e15" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.850496 4718 generic.go:334] "Generic (PLEG): container finished" podID="57379fa4-b935-4095-a6c1-9e83709c5906" containerID="964dc3702fd1d7e0864c0731b0297a203fefdf7c765b6a0138c57e189a6f6980" exitCode=1 Jan 23 
17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.850578 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" event={"ID":"57379fa4-b935-4095-a6c1-9e83709c5906","Type":"ContainerDied","Data":"964dc3702fd1d7e0864c0731b0297a203fefdf7c765b6a0138c57e189a6f6980"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.851450 4718 scope.go:117] "RemoveContainer" containerID="964dc3702fd1d7e0864c0731b0297a203fefdf7c765b6a0138c57e189a6f6980" Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.888390 4718 generic.go:334] "Generic (PLEG): container finished" podID="1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3" containerID="d113c776820c66dbe9feace81178dc79aee86971190f82d2d69c16b13f966ad6" exitCode=1 Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.888422 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" event={"ID":"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3","Type":"ContainerDied","Data":"d113c776820c66dbe9feace81178dc79aee86971190f82d2d69c16b13f966ad6"} Jan 23 17:56:13 crc kubenswrapper[4718]: I0123 17:56:13.888955 4718 scope.go:117] "RemoveContainer" containerID="d113c776820c66dbe9feace81178dc79aee86971190f82d2d69c16b13f966ad6" Jan 23 17:56:14 crc kubenswrapper[4718]: I0123 17:56:14.615547 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 17:56:15 crc kubenswrapper[4718]: I0123 17:56:15.913776 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 17:56:15 crc kubenswrapper[4718]: I0123 17:56:15.926475 4718 generic.go:334] "Generic (PLEG): container finished" podID="764a9a7e-61b2-4513-8f87-fc357857c90f" 
containerID="cfa855092aaa459299db7fa1bbf5593e8a38344687f51f39b203ab4373d7878d" exitCode=0 Jan 23 17:56:15 crc kubenswrapper[4718]: I0123 17:56:15.926565 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerDied","Data":"cfa855092aaa459299db7fa1bbf5593e8a38344687f51f39b203ab4373d7878d"} Jan 23 17:56:15 crc kubenswrapper[4718]: I0123 17:56:15.977677 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-858654f9db-8jlw4" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.559222 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.828181 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.946373 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" event={"ID":"8d9099e2-7f4f-42d8-8e76-d2d8347a1514","Type":"ContainerStarted","Data":"eddb909ab563b038b0752b47fbd78f757b1937c5776c5d0a133333321093b680"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.947934 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.955402 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" event={"ID":"8e29e3d6-21d7-4a1a-832e-f831d884fd00","Type":"ContainerStarted","Data":"001de13ecb8a8f9691cc000e3d2b3d3b8f159c474687b51b65f5638c6de6cb21"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.955879 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.958959 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.959534 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"087d02a130f3676f046f2b91bab5def89b54af7366977ba83591ab64cab2ff34"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.966550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64ktf" event={"ID":"235aadec-9416-469c-8455-64dd1bc82a08","Type":"ContainerStarted","Data":"941af478763f40d3c73bb59427f9bde2c34349954f645751e0739a3b7d3b6cb0"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.968703 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" event={"ID":"16e17ade-97be-48d4-83d4-7ac385174edd","Type":"ContainerStarted","Data":"72a3eadfd3ea4f1a3bd842d0b5e09d216b04954788f34a6b9de05ec8a9273694"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.969466 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.976812 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" event={"ID":"addb55c8-8565-42c2-84d2-7ee7e8693a3a","Type":"ContainerStarted","Data":"49b1a5fdb18329120bff554a9d16b4e173ccb4aeb336fc11cff7c78209a89185"} Jan 23 17:56:16 crc kubenswrapper[4718]: 
I0123 17:56:16.977178 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.991704 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" event={"ID":"3cfce3f5-1f59-43ae-aa99-2483cfb33806","Type":"ContainerStarted","Data":"7c1dc77a3e5384d48ffe2ae1ea8db5d844c628ad17ef4cf9cca4c6e8c4e7de44"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.992467 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.997809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" event={"ID":"858bcd70-b537-4da9-8ca9-27c1724ece99","Type":"ContainerStarted","Data":"c0a242b0a8f3df879edd963a324fa9a9c701c29e922c54128cc4e2b15f805c33"} Jan 23 17:56:16 crc kubenswrapper[4718]: I0123 17:56:16.998114 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.002755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" event={"ID":"2062a379-6201-4835-8974-24befcfbf8e0","Type":"ContainerStarted","Data":"435bc96d42a6c8c4711323791ec42868a6dfbf7d3a8a43ba94dbf5503f1269fc"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.003856 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.007603 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" event={"ID":"7eb6e283-9137-4b68-88b1-9a9dccb9fcd5","Type":"ContainerStarted","Data":"56497e8812d3606fef95dff88e77767dcf58d13eb74c7dd4fa0b25d4c857dfc9"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.007748 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.012229 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" event={"ID":"0fe9ca7e-5763-4cba-afc1-94065f21f33e","Type":"ContainerStarted","Data":"9aead22b8c3b3ca5575049e258e9f19ece2e6e19943001423205d3abb7d54cf9"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.012480 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.021898 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" event={"ID":"369053b2-11b0-4e19-a77d-3ea9cf595039","Type":"ContainerStarted","Data":"257a0d7b767aea5a6ea944d53c3bb085ccc21c3284dc34ce1e1353de71a52c5d"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.022135 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.026983 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" event={"ID":"a6012879-2e20-485d-829f-3a9ec3e5bcb1","Type":"ContainerStarted","Data":"697e79d846b88cf55fdacfcd0aed7e899b9314237ebd4692284d6a3868629e63"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.027187 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.035450 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.038148 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a76e0d622e7931657dbeb51a40509e958389d2ce35a79b2403affd0d5701cdab"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.043707 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" event={"ID":"d869ec7c-ddd9-4e17-9154-a793539a2a00","Type":"ContainerStarted","Data":"7f97df327923760d1403f6850dadbff379372b52234e843bfaa83655487bcfd4"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.044315 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.051237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" event={"ID":"f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078","Type":"ContainerStarted","Data":"89aaef45103a83a47a7d2ce1533d50ac3ce8f3934058d04dabce196e637a935c"} Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.052275 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.062930 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.063028 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.547183 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.609413 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.892110 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.924301 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.949652 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 17:56:17 crc kubenswrapper[4718]: I0123 17:56:17.971514 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.064152 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" event={"ID":"9e8950bc-8213-40eb-9bb7-2e1a8c66b57b","Type":"ContainerStarted","Data":"7ac6c911e338412895912e3fc439e12bae7be5ad4931d5ed917b8a7a9e54ca31"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.064348 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.067832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764a9a7e-61b2-4513-8f87-fc357857c90f","Type":"ContainerStarted","Data":"c8ed9eaa4f994ced3f2e6d09c9fb0b87ff237a81142b93ab70c18b4982f6c027"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.070571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" event={"ID":"ae7c1f40-90dd-441b-9dc5-608e1a503f4c","Type":"ContainerStarted","Data":"133c0f1c055c1d57a41d777fbcc346c0ef13eddbfab1e30ca167c96d0870f0bf"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.071447 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.074810 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8jlw4" event={"ID":"1e5ee60b-7363-4a74-b69d-1f4f474166e0","Type":"ContainerStarted","Data":"c2531cbf2f49ee207e05a9d0c41f1e073336a3b9d25e7a71ca65a7373c430944"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.083479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" event={"ID":"d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2","Type":"ContainerStarted","Data":"100b829119478ae0c887cdfef28edc7e94e6ebe0b43bb7a88f54af5856020e19"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.083946 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.089076 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" event={"ID":"1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3","Type":"ContainerStarted","Data":"b95bc8e3ceb5d59c8b4bda4fc90e1c208bd80ff7135689f62944518fbcc25891"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.090153 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.096255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" event={"ID":"32d58a3a-df31-492e-a2c2-2f5ca31c5f90","Type":"ContainerStarted","Data":"3bc969977afb99767911f7e7d1e9ab3b0b35924d74852118a53a59fc9f800b84"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.096352 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.100360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" event={"ID":"49cc2143-a384-436e-8eef-4d7474918177","Type":"ContainerStarted","Data":"ac40fd261eced2bfffe04e8cb5c1a8160d9518baa0396fa1a3fd477d2e89ec0f"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.101675 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.109963 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sntwx" event={"ID":"57379fa4-b935-4095-a6c1-9e83709c5906","Type":"ContainerStarted","Data":"c7c22f361e21094b35557fa21f8b137fe55f69f9077cffa66024856f6f5af1fe"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.115126 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" event={"ID":"7cd4d741-2a88-466f-a644-a1c6c62e521b","Type":"ContainerStarted","Data":"846a4a24f31620d723f4b013986146710077e9c1fc498b3d65c3556bd1c4cd3e"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.115648 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.119259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" event={"ID":"06df7a47-9233-4957-936e-27f58aeb0000","Type":"ContainerStarted","Data":"a1be6336be558db68d99d274c189ef1db2c7abd82f2798b64e47a519b0c2a7e6"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.119415 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.123336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" event={"ID":"0c42f381-34a5-4913-90b0-0bbc4e0810fd","Type":"ContainerStarted","Data":"67eda869652575b0baee88b9373ea94c95457d38d3aaa86f8f4cebba014c8efc"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.123752 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.126242 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" event={"ID":"18395392-bb8d-49be-9b49-950d6f32b9f6","Type":"ContainerStarted","Data":"b0e7a62c8de77af52e36829d044fffcd91c8609668fbeae72112c4e37d48aa2a"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.126535 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.128069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" event={"ID":"50178034-67cf-4f8d-89bb-788c8a73a72a","Type":"ContainerStarted","Data":"04b88ee155eee99ede6c471975bac5e4286673113e01cb95f8aa1309a9cbea01"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.128745 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.137109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" event={"ID":"9a95eff5-116c-4141-bee6-5bda12f21e11","Type":"ContainerStarted","Data":"b4bbc28c01d978e3c93e57bbea3db0ae9bc88216c8c393b0b9e2a736a680ed9e"} Jan 23 17:56:18 crc kubenswrapper[4718]: I0123 17:56:18.137151 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 17:56:21 crc kubenswrapper[4718]: I0123 17:56:21.199734 4718 generic.go:334] "Generic (PLEG): container finished" podID="ad5b2aea-ec41-49cb-ac4b-0497fed12dab" containerID="40526d359d65dfbe4c1eb01659505f0d50b2dafdc40139cae4984cbade47e88e" exitCode=0 Jan 23 17:56:21 crc kubenswrapper[4718]: I0123 17:56:21.200916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" event={"ID":"ad5b2aea-ec41-49cb-ac4b-0497fed12dab","Type":"ContainerDied","Data":"40526d359d65dfbe4c1eb01659505f0d50b2dafdc40139cae4984cbade47e88e"} Jan 23 17:56:21 crc kubenswrapper[4718]: I0123 17:56:21.202098 4718 scope.go:117] "RemoveContainer" 
containerID="40526d359d65dfbe4c1eb01659505f0d50b2dafdc40139cae4984cbade47e88e" Jan 23 17:56:22 crc kubenswrapper[4718]: I0123 17:56:22.213131 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" event={"ID":"ad5b2aea-ec41-49cb-ac4b-0497fed12dab","Type":"ContainerStarted","Data":"7c8909555869e34baaeec301d2a496e188fc7604eb1d9be9a9c52736ca8015b0"} Jan 23 17:56:22 crc kubenswrapper[4718]: I0123 17:56:22.213997 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" Jan 23 17:56:22 crc kubenswrapper[4718]: I0123 17:56:22.214791 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7bzfg" Jan 23 17:56:22 crc kubenswrapper[4718]: E0123 17:56:22.636475 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06df7a47_9233_4957_936e_27f58aeb0000.slice/crio-conmon-727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-conmon-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd4d741_2a88_466f_a644_a1c6c62e521b.slice/crio-conmon-f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:23 crc kubenswrapper[4718]: I0123 17:56:23.468949 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-4tm4n" Jan 23 17:56:23 crc kubenswrapper[4718]: I0123 17:56:23.528690 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:24 crc kubenswrapper[4718]: I0123 17:56:24.620942 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bhf98" Jan 23 17:56:26 crc kubenswrapper[4718]: E0123 17:56:26.088413 4718 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-conmon-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd4d741_2a88_466f_a644_a1c6c62e521b.slice/crio-conmon-f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06df7a47_9233_4957_936e_27f58aeb0000.slice/crio-conmon-727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:26 crc kubenswrapper[4718]: I0123 17:56:26.559437 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:26 crc kubenswrapper[4718]: I0123 17:56:26.565676 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:26 crc kubenswrapper[4718]: I0123 17:56:26.831734 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-54c9dfbc84-hsbh6" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.068158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7549f75f-929gl" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.277650 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.427306 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-62wgc" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.450348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9r22c" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.503358 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zs7zk" Jan 23 17:56:27 crc 
kubenswrapper[4718]: I0123 17:56:27.549992 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jjplg" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.611534 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-dfwk2" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.721251 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sr2hw" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.751188 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8fsk" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.895752 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-nwpcs" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.926663 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-jbxnk" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.952589 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.958269 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sr4hx" Jan 23 17:56:27 crc kubenswrapper[4718]: I0123 17:56:27.973881 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-2m5hx" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.038573 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kn2t8" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.166770 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-znjjw" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.265288 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-9kl82" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.338142 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-87q6t" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.383201 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7c7754d696-xthck" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.411998 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9vg4k" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.516508 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5lxl4" Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.876104 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:56:28 crc kubenswrapper[4718]: I0123 17:56:28.876167 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:30 crc kubenswrapper[4718]: I0123 17:56:30.453717 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74c6db8f6f-rkhth" Jan 23 17:56:32 crc kubenswrapper[4718]: E0123 17:56:32.930692 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7c1f40_90dd_441b_9dc5_608e1a503f4c.slice/crio-conmon-fa7fd7a29c5835685daf0e113f2eb1030f559f83dd34c082e56c081cf0ed0eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6012879_2e20_485d_829f_3a9ec3e5bcb1.slice/crio-conmon-786010b0c726ca3176c7fb8a9778406474ba04a21b372e4986ad91eca11d603d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-6ccf4864e070797043b4c7ab25a3c2c92ccd8bfc510c2f7cc9624b74f5d15bdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd4d741_2a88_466f_a644_a1c6c62e521b.slice/crio-conmon-f29da7f363d1abe368b0af3a37b4994f45b65676a87f8eda0f6246aa29c16d8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod858bcd70_b537_4da9_8ca9_27c1724ece99.slice/crio-conmon-b33c5929a0b3781f558c3cb8178dc1ca9f4e5ad1d5de54f83fc31428193aed74.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06df7a47_9233_4957_936e_27f58aeb0000.slice/crio-conmon-727ba97fcc16b81596864fe6ad4ea709d90c44d08d6307117a7d9307d839a50e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369053b2_11b0_4e19_a77d_3ea9cf595039.slice/crio-conmon-743ff81f2c888596c4886d9773a334626ae90653aaba5ec40bae715477ec071c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e17ade_97be_48d4_83d4_7ac385174edd.slice/crio-conmon-ba53d3fb91ce798c9a2db16e8460e2ee5bfc436dde38d6ec5d466009242a0b14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18395392_bb8d_49be_9b49_950d6f32b9f6.slice/crio-conmon-a366ca497ff23eae3b0d9d70e9f3336d685313197f5ccd1b5413aceedd2072c9.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:56:40 crc kubenswrapper[4718]: I0123 17:56:40.180753 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 17:56:53 crc kubenswrapper[4718]: I0123 17:56:53.062028 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85fcf7954b-5fmcn" Jan 23 17:56:58 crc kubenswrapper[4718]: I0123 17:56:58.876231 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:56:58 crc kubenswrapper[4718]: I0123 17:56:58.877786 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:58 crc kubenswrapper[4718]: I0123 17:56:58.878003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 17:56:58 crc kubenswrapper[4718]: I0123 17:56:58.879174 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:56:58 crc kubenswrapper[4718]: I0123 17:56:58.879237 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" gracePeriod=600 Jan 23 17:56:59 crc kubenswrapper[4718]: E0123 17:56:59.154537 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:56:59 crc kubenswrapper[4718]: I0123 17:56:59.686152 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" exitCode=0 Jan 23 17:56:59 crc kubenswrapper[4718]: I0123 
17:56:59.686199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"} Jan 23 17:56:59 crc kubenswrapper[4718]: I0123 17:56:59.686283 4718 scope.go:117] "RemoveContainer" containerID="b5fb94ce317661475574425319434743ab7a9ca05eb9dd4d117ff3e41f5b6647" Jan 23 17:56:59 crc kubenswrapper[4718]: I0123 17:56:59.687258 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:56:59 crc kubenswrapper[4718]: E0123 17:56:59.687644 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:57:05 crc kubenswrapper[4718]: I0123 17:57:05.244042 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" containerName="oauth-openshift" containerID="cri-o://d7973a66542460db1ebb43b78a1a5d36d08eb538372ddd4c7a334f7dcda14880" gracePeriod=15 Jan 23 17:57:06 crc kubenswrapper[4718]: I0123 17:57:06.791680 4718 generic.go:334] "Generic (PLEG): container finished" podID="65a1ab25-cdad-4128-a946-070231fd85fb" containerID="d7973a66542460db1ebb43b78a1a5d36d08eb538372ddd4c7a334f7dcda14880" exitCode=0 Jan 23 17:57:06 crc kubenswrapper[4718]: I0123 17:57:06.791774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" 
event={"ID":"65a1ab25-cdad-4128-a946-070231fd85fb","Type":"ContainerDied","Data":"d7973a66542460db1ebb43b78a1a5d36d08eb538372ddd4c7a334f7dcda14880"} Jan 23 17:57:06 crc kubenswrapper[4718]: I0123 17:57:06.987995 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.042208 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-99bb474d9-p6n52"] Jan 23 17:57:07 crc kubenswrapper[4718]: E0123 17:57:07.044144 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="registry-server" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044171 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="registry-server" Jan 23 17:57:07 crc kubenswrapper[4718]: E0123 17:57:07.044213 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="extract-utilities" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044221 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="extract-utilities" Jan 23 17:57:07 crc kubenswrapper[4718]: E0123 17:57:07.044239 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" containerName="oauth-openshift" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044246 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" containerName="oauth-openshift" Jan 23 17:57:07 crc kubenswrapper[4718]: E0123 17:57:07.044264 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="extract-content" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044270 
4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="extract-content" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044580 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" containerName="oauth-openshift" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.044796 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0058f1d4-3fdf-4686-8307-b92b9a6bbd93" containerName="registry-server" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.046749 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.067776 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.067955 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068030 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068066 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068157 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068185 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068265 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068290 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068335 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068407 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068434 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w24kn\" (UniqueName: \"kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068468 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068502 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.068525 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login\") pod \"65a1ab25-cdad-4128-a946-070231fd85fb\" (UID: \"65a1ab25-cdad-4128-a946-070231fd85fb\") " Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069625 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-router-certs\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069764 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-login\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069797 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-session\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " 
pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069942 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.069981 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070008 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-dir\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-policies\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070104 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-error\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070206 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070237 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxgr\" (UniqueName: \"kubernetes.io/projected/2d052904-30d0-4397-a2bd-644e7dfb3728-kube-api-access-kjxgr\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.070318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.072782 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca" 
(OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.073504 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-99bb474d9-p6n52"] Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.073492 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.074979 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.075386 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078550 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078658 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-service-ca\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078872 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078891 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078902 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.078922 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/65a1ab25-cdad-4128-a946-070231fd85fb-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.087144 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.103894 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.104428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.104761 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.105652 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.106201 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.106441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.112968 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.115176 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn" (OuterVolumeSpecName: "kube-api-access-w24kn") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "kube-api-access-w24kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.124547 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "65a1ab25-cdad-4128-a946-070231fd85fb" (UID: "65a1ab25-cdad-4128-a946-070231fd85fb"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.180798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.180848 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxgr\" (UniqueName: \"kubernetes.io/projected/2d052904-30d0-4397-a2bd-644e7dfb3728-kube-api-access-kjxgr\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.180901 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.180982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181018 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-service-ca\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-router-certs\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181083 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-login\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-session\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " 
pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181179 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181201 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-dir\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181220 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-policies\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-error\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181367 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181380 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181390 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181401 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181411 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181422 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w24kn\" (UniqueName: 
\"kubernetes.io/projected/65a1ab25-cdad-4128-a946-070231fd85fb-kube-api-access-w24kn\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181431 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181442 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181451 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.181461 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65a1ab25-cdad-4128-a946-070231fd85fb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.182049 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-dir\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.189258 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.189871 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-audit-policies\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.189860 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-session\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.190158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.190180 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-error\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 
17:57:07.189948 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.190296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.190945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-service-ca\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.192830 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-login\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.193138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.193305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.193493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d052904-30d0-4397-a2bd-644e7dfb3728-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.200273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxgr\" (UniqueName: \"kubernetes.io/projected/2d052904-30d0-4397-a2bd-644e7dfb3728-kube-api-access-kjxgr\") pod \"oauth-openshift-99bb474d9-p6n52\" (UID: \"2d052904-30d0-4397-a2bd-644e7dfb3728\") " pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.374762 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.806545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" event={"ID":"65a1ab25-cdad-4128-a946-070231fd85fb","Type":"ContainerDied","Data":"c77def6f43dc634ef12da2f785378e2b703a1a19ea118faeb97e57aff0dd0e66"} Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.806985 4718 scope.go:117] "RemoveContainer" containerID="d7973a66542460db1ebb43b78a1a5d36d08eb538372ddd4c7a334f7dcda14880" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.807201 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-ct48x" Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.853753 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.862832 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-ct48x"] Jan 23 17:57:07 crc kubenswrapper[4718]: I0123 17:57:07.892291 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-99bb474d9-p6n52"] Jan 23 17:57:08 crc kubenswrapper[4718]: I0123 17:57:08.820018 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" event={"ID":"2d052904-30d0-4397-a2bd-644e7dfb3728","Type":"ContainerStarted","Data":"cded26cc35e836d768f3847644c5a35c095ee24296e2093a753dc7d6044bbdc5"} Jan 23 17:57:08 crc kubenswrapper[4718]: I0123 17:57:08.820347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" 
event={"ID":"2d052904-30d0-4397-a2bd-644e7dfb3728","Type":"ContainerStarted","Data":"2fbefe76675a8bcdfab7bc1b9a3b5dd4be22284a1701955c73226f26a4f4eaa3"} Jan 23 17:57:09 crc kubenswrapper[4718]: I0123 17:57:09.157712 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a1ab25-cdad-4128-a946-070231fd85fb" path="/var/lib/kubelet/pods/65a1ab25-cdad-4128-a946-070231fd85fb/volumes" Jan 23 17:57:09 crc kubenswrapper[4718]: I0123 17:57:09.832490 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:09 crc kubenswrapper[4718]: I0123 17:57:09.954295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" Jan 23 17:57:09 crc kubenswrapper[4718]: I0123 17:57:09.970625 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-99bb474d9-p6n52" podStartSLOduration=29.970204314 podStartE2EDuration="29.970204314s" podCreationTimestamp="2026-01-23 17:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:09.963598805 +0000 UTC m=+6031.110840856" watchObservedRunningTime="2026-01-23 17:57:09.970204314 +0000 UTC m=+6031.117446345" Jan 23 17:57:13 crc kubenswrapper[4718]: I0123 17:57:13.140308 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:57:13 crc kubenswrapper[4718]: E0123 17:57:13.141132 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:57:26 crc kubenswrapper[4718]: I0123 17:57:26.141443 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:57:26 crc kubenswrapper[4718]: E0123 17:57:26.142739 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:57:37 crc kubenswrapper[4718]: I0123 17:57:37.141977 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:57:37 crc kubenswrapper[4718]: E0123 17:57:37.142837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:57:52 crc kubenswrapper[4718]: I0123 17:57:52.140440 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:57:52 crc kubenswrapper[4718]: E0123 17:57:52.141371 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:58:05 crc kubenswrapper[4718]: I0123 17:58:05.141230 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:58:05 crc kubenswrapper[4718]: E0123 17:58:05.142234 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.677586 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.683038 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.711185 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.713776 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.713853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtcf\" (UniqueName: \"kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.713885 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.816345 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.816437 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-txtcf\" (UniqueName: \"kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.816462 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.817070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.817099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:11 crc kubenswrapper[4718]: I0123 17:58:11.839506 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txtcf\" (UniqueName: \"kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf\") pod \"redhat-marketplace-mc45s\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:12 crc kubenswrapper[4718]: I0123 17:58:12.011674 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:12 crc kubenswrapper[4718]: I0123 17:58:12.531495 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:12 crc kubenswrapper[4718]: I0123 17:58:12.634452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerStarted","Data":"72a8f7afd6ae9d0ccbc0b0e2e222c116175eaf840caa3c68bd364202df1902eb"} Jan 23 17:58:13 crc kubenswrapper[4718]: I0123 17:58:13.651158 4718 generic.go:334] "Generic (PLEG): container finished" podID="766f0959-c82c-4259-a8cc-add58cff034c" containerID="20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36" exitCode=0 Jan 23 17:58:13 crc kubenswrapper[4718]: I0123 17:58:13.651345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerDied","Data":"20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36"} Jan 23 17:58:15 crc kubenswrapper[4718]: I0123 17:58:15.678717 4718 generic.go:334] "Generic (PLEG): container finished" podID="766f0959-c82c-4259-a8cc-add58cff034c" containerID="934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627" exitCode=0 Jan 23 17:58:15 crc kubenswrapper[4718]: I0123 17:58:15.678770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerDied","Data":"934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627"} Jan 23 17:58:16 crc kubenswrapper[4718]: I0123 17:58:16.706034 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" 
event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerStarted","Data":"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4"} Jan 23 17:58:18 crc kubenswrapper[4718]: I0123 17:58:18.140655 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:58:18 crc kubenswrapper[4718]: E0123 17:58:18.141194 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.012546 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.012954 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.084607 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.115469 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mc45s" podStartSLOduration=8.641398867 podStartE2EDuration="11.115445454s" podCreationTimestamp="2026-01-23 17:58:11 +0000 UTC" firstStartedPulling="2026-01-23 17:58:13.653331782 +0000 UTC m=+6094.800573773" lastFinishedPulling="2026-01-23 17:58:16.127378369 +0000 UTC m=+6097.274620360" observedRunningTime="2026-01-23 17:58:16.740279746 +0000 UTC m=+6097.887521737" watchObservedRunningTime="2026-01-23 17:58:22.115445454 
+0000 UTC m=+6103.262687445" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.857799 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:22 crc kubenswrapper[4718]: I0123 17:58:22.955282 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:24 crc kubenswrapper[4718]: I0123 17:58:24.811002 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mc45s" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="registry-server" containerID="cri-o://714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4" gracePeriod=2 Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.377313 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.479337 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtcf\" (UniqueName: \"kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf\") pod \"766f0959-c82c-4259-a8cc-add58cff034c\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.479497 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content\") pod \"766f0959-c82c-4259-a8cc-add58cff034c\" (UID: \"766f0959-c82c-4259-a8cc-add58cff034c\") " Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.479763 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities\") pod \"766f0959-c82c-4259-a8cc-add58cff034c\" (UID: 
\"766f0959-c82c-4259-a8cc-add58cff034c\") " Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.480992 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities" (OuterVolumeSpecName: "utilities") pod "766f0959-c82c-4259-a8cc-add58cff034c" (UID: "766f0959-c82c-4259-a8cc-add58cff034c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.487523 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf" (OuterVolumeSpecName: "kube-api-access-txtcf") pod "766f0959-c82c-4259-a8cc-add58cff034c" (UID: "766f0959-c82c-4259-a8cc-add58cff034c"). InnerVolumeSpecName "kube-api-access-txtcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.506717 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "766f0959-c82c-4259-a8cc-add58cff034c" (UID: "766f0959-c82c-4259-a8cc-add58cff034c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.583400 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.583436 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txtcf\" (UniqueName: \"kubernetes.io/projected/766f0959-c82c-4259-a8cc-add58cff034c-kube-api-access-txtcf\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.583448 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766f0959-c82c-4259-a8cc-add58cff034c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.824852 4718 generic.go:334] "Generic (PLEG): container finished" podID="766f0959-c82c-4259-a8cc-add58cff034c" containerID="714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4" exitCode=0 Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.824907 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc45s" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.824908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerDied","Data":"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4"} Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.825024 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc45s" event={"ID":"766f0959-c82c-4259-a8cc-add58cff034c","Type":"ContainerDied","Data":"72a8f7afd6ae9d0ccbc0b0e2e222c116175eaf840caa3c68bd364202df1902eb"} Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.825048 4718 scope.go:117] "RemoveContainer" containerID="714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.859480 4718 scope.go:117] "RemoveContainer" containerID="934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.866175 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.877396 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc45s"] Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.895118 4718 scope.go:117] "RemoveContainer" containerID="20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.954907 4718 scope.go:117] "RemoveContainer" containerID="714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4" Jan 23 17:58:25 crc kubenswrapper[4718]: E0123 17:58:25.956099 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4\": container with ID starting with 714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4 not found: ID does not exist" containerID="714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.956370 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4"} err="failed to get container status \"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4\": rpc error: code = NotFound desc = could not find container \"714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4\": container with ID starting with 714a9f73eed3c64e3a34717981ad3eb794c7d8bb51915127823ca0982797e7c4 not found: ID does not exist" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.956407 4718 scope.go:117] "RemoveContainer" containerID="934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627" Jan 23 17:58:25 crc kubenswrapper[4718]: E0123 17:58:25.956834 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627\": container with ID starting with 934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627 not found: ID does not exist" containerID="934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.956878 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627"} err="failed to get container status \"934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627\": rpc error: code = NotFound desc = could not find container \"934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627\": container with ID 
starting with 934dca73e33f24ff944a6faf58b44e8329377e9eb5de5a215432f5ecad6ad627 not found: ID does not exist" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.956908 4718 scope.go:117] "RemoveContainer" containerID="20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36" Jan 23 17:58:25 crc kubenswrapper[4718]: E0123 17:58:25.957241 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36\": container with ID starting with 20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36 not found: ID does not exist" containerID="20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36" Jan 23 17:58:25 crc kubenswrapper[4718]: I0123 17:58:25.957269 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36"} err="failed to get container status \"20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36\": rpc error: code = NotFound desc = could not find container \"20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36\": container with ID starting with 20199efc7daa16943adca4853bb64b8a9ba874ed49b368416890baa1794f6c36 not found: ID does not exist" Jan 23 17:58:27 crc kubenswrapper[4718]: I0123 17:58:27.160772 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766f0959-c82c-4259-a8cc-add58cff034c" path="/var/lib/kubelet/pods/766f0959-c82c-4259-a8cc-add58cff034c/volumes" Jan 23 17:58:33 crc kubenswrapper[4718]: I0123 17:58:33.141951 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:58:33 crc kubenswrapper[4718]: E0123 17:58:33.143259 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:58:48 crc kubenswrapper[4718]: I0123 17:58:48.140714 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:58:48 crc kubenswrapper[4718]: E0123 17:58:48.141814 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:58:57 crc kubenswrapper[4718]: I0123 17:58:57.207317 4718 generic.go:334] "Generic (PLEG): container finished" podID="21700319-2dc7-41c4-8377-8ba6ef629cbb" containerID="f946573a95663ea3b2443e9e5507a3bce8a80352348f25ee91ccec54733d20be" exitCode=0 Jan 23 17:58:57 crc kubenswrapper[4718]: I0123 17:58:57.207409 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"21700319-2dc7-41c4-8377-8ba6ef629cbb","Type":"ContainerDied","Data":"f946573a95663ea3b2443e9e5507a3bce8a80352348f25ee91ccec54733d20be"} Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.646376 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723083 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723536 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723607 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723691 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723717 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723787 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bdr4\" 
(UniqueName: \"kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723810 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723838 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.723907 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data\") pod \"21700319-2dc7-41c4-8377-8ba6ef629cbb\" (UID: \"21700319-2dc7-41c4-8377-8ba6ef629cbb\") " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.725045 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.725522 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.726138 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data" (OuterVolumeSpecName: "config-data") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.743866 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.744314 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4" (OuterVolumeSpecName: "kube-api-access-9bdr4") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "kube-api-access-9bdr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.744500 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.767103 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.769879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.778745 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.818250 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "21700319-2dc7-41c4-8377-8ba6ef629cbb" (UID: "21700319-2dc7-41c4-8377-8ba6ef629cbb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.827660 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.827705 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.827720 4718 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.827736 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21700319-2dc7-41c4-8377-8ba6ef629cbb-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.828047 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.828078 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/21700319-2dc7-41c4-8377-8ba6ef629cbb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.828095 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bdr4\" (UniqueName: \"kubernetes.io/projected/21700319-2dc7-41c4-8377-8ba6ef629cbb-kube-api-access-9bdr4\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.828107 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/21700319-2dc7-41c4-8377-8ba6ef629cbb-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.857222 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 17:58:58 crc kubenswrapper[4718]: I0123 17:58:58.930087 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 17:58:59 crc kubenswrapper[4718]: I0123 17:58:59.235198 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"21700319-2dc7-41c4-8377-8ba6ef629cbb","Type":"ContainerDied","Data":"ee56bf0d0b5cc049c974efb0df2ded82bcbb26e29f2f7908ac706bc979bdeeb9"} Jan 23 17:58:59 crc kubenswrapper[4718]: I0123 17:58:59.235659 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee56bf0d0b5cc049c974efb0df2ded82bcbb26e29f2f7908ac706bc979bdeeb9" Jan 23 17:58:59 crc kubenswrapper[4718]: I0123 17:58:59.235272 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 17:59:01 crc kubenswrapper[4718]: I0123 17:59:01.142450 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:59:01 crc kubenswrapper[4718]: E0123 17:59:01.143244 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.181283 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 17:59:04 crc kubenswrapper[4718]: E0123 17:59:04.182658 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="extract-content" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.182685 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="extract-content" Jan 23 17:59:04 crc kubenswrapper[4718]: E0123 17:59:04.182698 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="extract-utilities" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.182709 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="extract-utilities" Jan 23 17:59:04 crc kubenswrapper[4718]: E0123 17:59:04.182734 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21700319-2dc7-41c4-8377-8ba6ef629cbb" containerName="tempest-tests-tempest-tests-runner" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 
17:59:04.182741 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="21700319-2dc7-41c4-8377-8ba6ef629cbb" containerName="tempest-tests-tempest-tests-runner" Jan 23 17:59:04 crc kubenswrapper[4718]: E0123 17:59:04.182795 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.182804 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.183253 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="21700319-2dc7-41c4-8377-8ba6ef629cbb" containerName="tempest-tests-tempest-tests-runner" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.183282 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="766f0959-c82c-4259-a8cc-add58cff034c" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.184274 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.193030 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.200895 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9zl9c" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.270152 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.270283 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmsk\" (UniqueName: \"kubernetes.io/projected/68ee45e3-d5d8-4579-85c7-d46064d4ac6c-kube-api-access-rjmsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.373397 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.373478 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmsk\" (UniqueName: 
\"kubernetes.io/projected/68ee45e3-d5d8-4579-85c7-d46064d4ac6c-kube-api-access-rjmsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.374408 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.398876 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmsk\" (UniqueName: \"kubernetes.io/projected/68ee45e3-d5d8-4579-85c7-d46064d4ac6c-kube-api-access-rjmsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.410253 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68ee45e3-d5d8-4579-85c7-d46064d4ac6c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.512619 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 17:59:04 crc kubenswrapper[4718]: I0123 17:59:04.979937 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 17:59:05 crc kubenswrapper[4718]: I0123 17:59:05.300908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"68ee45e3-d5d8-4579-85c7-d46064d4ac6c","Type":"ContainerStarted","Data":"f15f73105f7422318ca6270ba7e54ad0cc6d87aee9440b71e7b41e9c522f476a"} Jan 23 17:59:07 crc kubenswrapper[4718]: I0123 17:59:07.330397 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"68ee45e3-d5d8-4579-85c7-d46064d4ac6c","Type":"ContainerStarted","Data":"a92220ede9f852feb50e328355e9632fdaa69762ac18e1356f7e388596a9d27e"} Jan 23 17:59:07 crc kubenswrapper[4718]: I0123 17:59:07.348047 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.2611261799999998 podStartE2EDuration="3.348027746s" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="2026-01-23 17:59:04.991831444 +0000 UTC m=+6146.139073435" lastFinishedPulling="2026-01-23 17:59:06.07873301 +0000 UTC m=+6147.225975001" observedRunningTime="2026-01-23 17:59:07.344720006 +0000 UTC m=+6148.491961997" watchObservedRunningTime="2026-01-23 17:59:07.348027746 +0000 UTC m=+6148.495269737" Jan 23 17:59:14 crc kubenswrapper[4718]: I0123 17:59:14.140607 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:59:14 crc kubenswrapper[4718]: E0123 17:59:14.141788 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:59:28 crc kubenswrapper[4718]: I0123 17:59:28.141389 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 17:59:28 crc kubenswrapper[4718]: E0123 17:59:28.142255 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.218542 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zjg6c/must-gather-z7bqw"] Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.220877 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.222472 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zjg6c"/"default-dockercfg-g2skd" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.224888 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zjg6c"/"kube-root-ca.crt" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.229013 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zjg6c"/"openshift-service-ca.crt" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.249535 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zjg6c/must-gather-z7bqw"] Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.339079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.339618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9kh\" (UniqueName: \"kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.442170 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " 
pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.442365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9kh\" (UniqueName: \"kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.442893 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.464713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9kh\" (UniqueName: \"kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh\") pod \"must-gather-z7bqw\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 17:59:36 crc kubenswrapper[4718]: I0123 17:59:36.546940 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/must-gather-z7bqw"
Jan 23 17:59:37 crc kubenswrapper[4718]: I0123 17:59:37.080463 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zjg6c/must-gather-z7bqw"]
Jan 23 17:59:37 crc kubenswrapper[4718]: I0123 17:59:37.101299 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 17:59:37 crc kubenswrapper[4718]: I0123 17:59:37.721694 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" event={"ID":"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276","Type":"ContainerStarted","Data":"4367a527684f6152058a484843f44e9c93fc57d76fc9d07f0a219b3625414d3c"}
Jan 23 17:59:42 crc kubenswrapper[4718]: I0123 17:59:42.141367 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 17:59:42 crc kubenswrapper[4718]: E0123 17:59:42.142877 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 17:59:45 crc kubenswrapper[4718]: I0123 17:59:45.847088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" event={"ID":"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276","Type":"ContainerStarted","Data":"a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd"}
Jan 23 17:59:45 crc kubenswrapper[4718]: I0123 17:59:45.847881 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" event={"ID":"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276","Type":"ContainerStarted","Data":"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791"}
Jan 23 17:59:45 crc kubenswrapper[4718]: I0123 17:59:45.876428 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" podStartSLOduration=1.964307567 podStartE2EDuration="9.87640047s" podCreationTimestamp="2026-01-23 17:59:36 +0000 UTC" firstStartedPulling="2026-01-23 17:59:37.100729503 +0000 UTC m=+6178.247971494" lastFinishedPulling="2026-01-23 17:59:45.012822406 +0000 UTC m=+6186.160064397" observedRunningTime="2026-01-23 17:59:45.871821625 +0000 UTC m=+6187.019063616" watchObservedRunningTime="2026-01-23 17:59:45.87640047 +0000 UTC m=+6187.023642451"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.399393 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-gh7dl"]
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.402688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.471600 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4qb\" (UniqueName: \"kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.471915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.574056 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.574177 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4qb\" (UniqueName: \"kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.601028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4qb\" (UniqueName: \"kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:50 crc kubenswrapper[4718]: I0123 17:59:50.876222 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host\") pod \"crc-debug-gh7dl\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") " pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:51 crc kubenswrapper[4718]: I0123 17:59:51.028985 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 17:59:51 crc kubenswrapper[4718]: W0123 17:59:51.923417 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab08179c_f9b5_4c22_a9d2_abda4411adff.slice/crio-5526ff443a83a503c4df29bd779578f8fe45397d7517adc5de60226df03071fa WatchSource:0}: Error finding container 5526ff443a83a503c4df29bd779578f8fe45397d7517adc5de60226df03071fa: Status 404 returned error can't find the container with id 5526ff443a83a503c4df29bd779578f8fe45397d7517adc5de60226df03071fa
Jan 23 17:59:52 crc kubenswrapper[4718]: I0123 17:59:52.952936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl" event={"ID":"ab08179c-f9b5-4c22-a9d2-abda4411adff","Type":"ContainerStarted","Data":"5526ff443a83a503c4df29bd779578f8fe45397d7517adc5de60226df03071fa"}
Jan 23 17:59:56 crc kubenswrapper[4718]: I0123 17:59:56.147608 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 17:59:56 crc kubenswrapper[4718]: E0123 17:59:56.148916 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.260185 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"]
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.267893 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.272579 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.272942 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.305618 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"]
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.465998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.466175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c564j\" (UniqueName: \"kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.466294 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.568659 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.570250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c564j\" (UniqueName: \"kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.570614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.571954 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.574973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.589458 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c564j\" (UniqueName: \"kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j\") pod \"collect-profiles-29486520-czjzq\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:00 crc kubenswrapper[4718]: I0123 18:00:00.635020 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:05 crc kubenswrapper[4718]: I0123 18:00:05.668895 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"]
Jan 23 18:00:06 crc kubenswrapper[4718]: I0123 18:00:06.147375 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl" event={"ID":"ab08179c-f9b5-4c22-a9d2-abda4411adff","Type":"ContainerStarted","Data":"ad97f90bfa992d1d52faac1d5960f95b69b3c3d5f7b8a214b08b9c97884c8ca7"}
Jan 23 18:00:06 crc kubenswrapper[4718]: I0123 18:00:06.151118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq" event={"ID":"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe","Type":"ContainerStarted","Data":"36006f46049e7644e917ef96dda8afacb9ee4d98d1174d2d1ad748973c8075f8"}
Jan 23 18:00:06 crc kubenswrapper[4718]: I0123 18:00:06.151157 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq" event={"ID":"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe","Type":"ContainerStarted","Data":"f6def8d67bad5cff3831bd1b511b858d358300f058a38f8bb5eb73f7be09dd34"}
Jan 23 18:00:06 crc kubenswrapper[4718]: I0123 18:00:06.175598 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl" podStartSLOduration=3.24995569 podStartE2EDuration="16.175580106s" podCreationTimestamp="2026-01-23 17:59:50 +0000 UTC" firstStartedPulling="2026-01-23 17:59:51.92755846 +0000 UTC m=+6193.074800451" lastFinishedPulling="2026-01-23 18:00:04.853182876 +0000 UTC m=+6206.000424867" observedRunningTime="2026-01-23 18:00:06.160441404 +0000 UTC m=+6207.307683395" watchObservedRunningTime="2026-01-23 18:00:06.175580106 +0000 UTC m=+6207.322822087"
Jan 23 18:00:06 crc kubenswrapper[4718]: I0123 18:00:06.181511 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq" podStartSLOduration=6.181492947 podStartE2EDuration="6.181492947s" podCreationTimestamp="2026-01-23 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:00:06.179831822 +0000 UTC m=+6207.327073823" watchObservedRunningTime="2026-01-23 18:00:06.181492947 +0000 UTC m=+6207.328734938"
Jan 23 18:00:07 crc kubenswrapper[4718]: I0123 18:00:07.141550 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 18:00:07 crc kubenswrapper[4718]: E0123 18:00:07.142673 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:07 crc kubenswrapper[4718]: I0123 18:00:07.166141 4718 generic.go:334] "Generic (PLEG): container finished" podID="9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" containerID="36006f46049e7644e917ef96dda8afacb9ee4d98d1174d2d1ad748973c8075f8" exitCode=0
Jan 23 18:00:07 crc kubenswrapper[4718]: I0123 18:00:07.166284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq" event={"ID":"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe","Type":"ContainerDied","Data":"36006f46049e7644e917ef96dda8afacb9ee4d98d1174d2d1ad748973c8075f8"}
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.600483 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.705766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume\") pod \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") "
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.705852 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume\") pod \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") "
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.705964 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c564j\" (UniqueName: \"kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j\") pod \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\" (UID: \"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe\") "
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.706819 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" (UID: "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.709211 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.718929 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" (UID: "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.732922 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j" (OuterVolumeSpecName: "kube-api-access-c564j") pod "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" (UID: "9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe"). InnerVolumeSpecName "kube-api-access-c564j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.757153 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj"]
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.783247 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-8clmj"]
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.813953 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 18:00:08 crc kubenswrapper[4718]: I0123 18:00:08.813993 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c564j\" (UniqueName: \"kubernetes.io/projected/9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe-kube-api-access-c564j\") on node \"crc\" DevicePath \"\""
Jan 23 18:00:09 crc kubenswrapper[4718]: I0123 18:00:09.156733 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd64bef-5c37-47b0-9777-0e94578f7f35" path="/var/lib/kubelet/pods/8cd64bef-5c37-47b0-9777-0e94578f7f35/volumes"
Jan 23 18:00:09 crc kubenswrapper[4718]: I0123 18:00:09.190081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq" event={"ID":"9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe","Type":"ContainerDied","Data":"f6def8d67bad5cff3831bd1b511b858d358300f058a38f8bb5eb73f7be09dd34"}
Jan 23 18:00:09 crc kubenswrapper[4718]: I0123 18:00:09.190123 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6def8d67bad5cff3831bd1b511b858d358300f058a38f8bb5eb73f7be09dd34"
Jan 23 18:00:09 crc kubenswrapper[4718]: I0123 18:00:09.190144 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-czjzq"
Jan 23 18:00:10 crc kubenswrapper[4718]: I0123 18:00:10.298293 4718 scope.go:117] "RemoveContainer" containerID="521ea7467cdea2f006fcd603365ab000659809e990042087dfaf6ba620e2a023"
Jan 23 18:00:18 crc kubenswrapper[4718]: I0123 18:00:18.140473 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 18:00:18 crc kubenswrapper[4718]: E0123 18:00:18.141412 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:31 crc kubenswrapper[4718]: I0123 18:00:31.141414 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 18:00:31 crc kubenswrapper[4718]: E0123 18:00:31.142265 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:45 crc kubenswrapper[4718]: I0123 18:00:45.143794 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 18:00:45 crc kubenswrapper[4718]: E0123 18:00:45.144697 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:56 crc kubenswrapper[4718]: I0123 18:00:56.824324 4718 generic.go:334] "Generic (PLEG): container finished" podID="ab08179c-f9b5-4c22-a9d2-abda4411adff" containerID="ad97f90bfa992d1d52faac1d5960f95b69b3c3d5f7b8a214b08b9c97884c8ca7" exitCode=0
Jan 23 18:00:56 crc kubenswrapper[4718]: I0123 18:00:56.824399 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl" event={"ID":"ab08179c-f9b5-4c22-a9d2-abda4411adff","Type":"ContainerDied","Data":"ad97f90bfa992d1d52faac1d5960f95b69b3c3d5f7b8a214b08b9c97884c8ca7"}
Jan 23 18:00:57 crc kubenswrapper[4718]: I0123 18:00:57.964733 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.004817 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-gh7dl"]
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.014215 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-gh7dl"]
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.123398 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host\") pod \"ab08179c-f9b5-4c22-a9d2-abda4411adff\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") "
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.123609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host" (OuterVolumeSpecName: "host") pod "ab08179c-f9b5-4c22-a9d2-abda4411adff" (UID: "ab08179c-f9b5-4c22-a9d2-abda4411adff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.123683 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4qb\" (UniqueName: \"kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb\") pod \"ab08179c-f9b5-4c22-a9d2-abda4411adff\" (UID: \"ab08179c-f9b5-4c22-a9d2-abda4411adff\") "
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.124552 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab08179c-f9b5-4c22-a9d2-abda4411adff-host\") on node \"crc\" DevicePath \"\""
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.128986 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb" (OuterVolumeSpecName: "kube-api-access-rn4qb") pod "ab08179c-f9b5-4c22-a9d2-abda4411adff" (UID: "ab08179c-f9b5-4c22-a9d2-abda4411adff"). InnerVolumeSpecName "kube-api-access-rn4qb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.140896 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5"
Jan 23 18:00:58 crc kubenswrapper[4718]: E0123 18:00:58.141205 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.227449 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4qb\" (UniqueName: \"kubernetes.io/projected/ab08179c-f9b5-4c22-a9d2-abda4411adff-kube-api-access-rn4qb\") on node \"crc\" DevicePath \"\""
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.850172 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5526ff443a83a503c4df29bd779578f8fe45397d7517adc5de60226df03071fa"
Jan 23 18:00:58 crc kubenswrapper[4718]: I0123 18:00:58.850229 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-gh7dl"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.154878 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab08179c-f9b5-4c22-a9d2-abda4411adff" path="/var/lib/kubelet/pods/ab08179c-f9b5-4c22-a9d2-abda4411adff/volumes"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.210472 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-pdrtw"]
Jan 23 18:00:59 crc kubenswrapper[4718]: E0123 18:00:59.211083 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" containerName="collect-profiles"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.211103 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" containerName="collect-profiles"
Jan 23 18:00:59 crc kubenswrapper[4718]: E0123 18:00:59.211124 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab08179c-f9b5-4c22-a9d2-abda4411adff" containerName="container-00"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.211132 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab08179c-f9b5-4c22-a9d2-abda4411adff" containerName="container-00"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.211403 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab08179c-f9b5-4c22-a9d2-abda4411adff" containerName="container-00"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.211456 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4ca7fc-9a7e-4c35-a98d-8a0aca079efe" containerName="collect-profiles"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.212412 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.354248 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmww\" (UniqueName: \"kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.354317 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.456798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmww\" (UniqueName: \"kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.456849 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.456980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.502547 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmww\" (UniqueName: \"kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww\") pod \"crc-debug-pdrtw\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.538166 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw"
Jan 23 18:00:59 crc kubenswrapper[4718]: I0123 18:00:59.864618 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw" event={"ID":"20246f9b-686e-4c1e-b4b2-185e270eb8c4","Type":"ContainerStarted","Data":"0b1c463bb868488ff3ba51ba7ca5fd6fb7c1984b59f48077e14e017e65872f84"}
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.166001 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486521-72c9v"]
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.168185 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.187985 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486521-72c9v"]
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.284690 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.285241 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.285459 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.285948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jdz\" (UniqueName: \"kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.388521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jdz\" (UniqueName: \"kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.389006 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.389140 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.389230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.400526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.411992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.414092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.418589 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jdz\" (UniqueName: \"kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz\") pod \"keystone-cron-29486521-72c9v\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.498735 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-72c9v"
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.878261 4718 generic.go:334] "Generic (PLEG): container finished" podID="20246f9b-686e-4c1e-b4b2-185e270eb8c4" containerID="1801e79fe8b36a48e9393136cf5935ccd3fa2f7e1cd338d76b69c686995640a8" exitCode=0
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.878314 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw" event={"ID":"20246f9b-686e-4c1e-b4b2-185e270eb8c4","Type":"ContainerDied","Data":"1801e79fe8b36a48e9393136cf5935ccd3fa2f7e1cd338d76b69c686995640a8"}
Jan 23 18:01:00 crc kubenswrapper[4718]: W0123 18:01:00.979973 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc16bee66_a1b2_451c_81b8_d33bba2aae01.slice/crio-2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48 WatchSource:0}: Error finding container 2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48: Status 404 returned error can't find the container with id 2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48
Jan 23 18:01:00 crc kubenswrapper[4718]: I0123 18:01:00.980334 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486521-72c9v"]
Jan 23 18:01:01 crc kubenswrapper[4718]: I0123 18:01:01.897653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-72c9v" event={"ID":"c16bee66-a1b2-451c-81b8-d33bba2aae01","Type":"ContainerStarted","Data":"b68ebbe6e8c03f296083370029042b24541a74d2a181bb52773380609d572576"}
Jan 23 18:01:01 crc kubenswrapper[4718]: I0123 18:01:01.898296 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-72c9v" event={"ID":"c16bee66-a1b2-451c-81b8-d33bba2aae01","Type":"ContainerStarted","Data":"2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48"}
Jan 23 18:01:02 crc 
kubenswrapper[4718]: I0123 18:01:02.040691 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.059722 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486521-72c9v" podStartSLOduration=2.059696774 podStartE2EDuration="2.059696774s" podCreationTimestamp="2026-01-23 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:01:01.940058851 +0000 UTC m=+6263.087300852" watchObservedRunningTime="2026-01-23 18:01:02.059696774 +0000 UTC m=+6263.206938775" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.135408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcmww\" (UniqueName: \"kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww\") pod \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.135885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host\") pod \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\" (UID: \"20246f9b-686e-4c1e-b4b2-185e270eb8c4\") " Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.136814 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host" (OuterVolumeSpecName: "host") pod "20246f9b-686e-4c1e-b4b2-185e270eb8c4" (UID: "20246f9b-686e-4c1e-b4b2-185e270eb8c4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.138937 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20246f9b-686e-4c1e-b4b2-185e270eb8c4-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.141412 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww" (OuterVolumeSpecName: "kube-api-access-kcmww") pod "20246f9b-686e-4c1e-b4b2-185e270eb8c4" (UID: "20246f9b-686e-4c1e-b4b2-185e270eb8c4"). InnerVolumeSpecName "kube-api-access-kcmww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.243689 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcmww\" (UniqueName: \"kubernetes.io/projected/20246f9b-686e-4c1e-b4b2-185e270eb8c4-kube-api-access-kcmww\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.934404 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw" Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.934545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-pdrtw" event={"ID":"20246f9b-686e-4c1e-b4b2-185e270eb8c4","Type":"ContainerDied","Data":"0b1c463bb868488ff3ba51ba7ca5fd6fb7c1984b59f48077e14e017e65872f84"} Jan 23 18:01:02 crc kubenswrapper[4718]: I0123 18:01:02.939863 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1c463bb868488ff3ba51ba7ca5fd6fb7c1984b59f48077e14e017e65872f84" Jan 23 18:01:03 crc kubenswrapper[4718]: I0123 18:01:03.367376 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-pdrtw"] Jan 23 18:01:03 crc kubenswrapper[4718]: I0123 18:01:03.409496 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-pdrtw"] Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.535849 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-8pbcz"] Jan 23 18:01:04 crc kubenswrapper[4718]: E0123 18:01:04.536691 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20246f9b-686e-4c1e-b4b2-185e270eb8c4" containerName="container-00" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.536713 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="20246f9b-686e-4c1e-b4b2-185e270eb8c4" containerName="container-00" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.537069 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="20246f9b-686e-4c1e-b4b2-185e270eb8c4" containerName="container-00" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.538087 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.618861 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zl9p\" (UniqueName: \"kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.619209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.721450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zl9p\" (UniqueName: \"kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.721533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.721941 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc 
kubenswrapper[4718]: I0123 18:01:04.747058 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zl9p\" (UniqueName: \"kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p\") pod \"crc-debug-8pbcz\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.855842 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:04 crc kubenswrapper[4718]: W0123 18:01:04.897234 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4505ef81_9301_43a0_9b4e_12436bd34258.slice/crio-576f5d604acc17a3de661980f31eb33894e421329c86a174f62f320e8df525b9 WatchSource:0}: Error finding container 576f5d604acc17a3de661980f31eb33894e421329c86a174f62f320e8df525b9: Status 404 returned error can't find the container with id 576f5d604acc17a3de661980f31eb33894e421329c86a174f62f320e8df525b9 Jan 23 18:01:04 crc kubenswrapper[4718]: I0123 18:01:04.956599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" event={"ID":"4505ef81-9301-43a0-9b4e-12436bd34258","Type":"ContainerStarted","Data":"576f5d604acc17a3de661980f31eb33894e421329c86a174f62f320e8df525b9"} Jan 23 18:01:05 crc kubenswrapper[4718]: I0123 18:01:05.157581 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20246f9b-686e-4c1e-b4b2-185e270eb8c4" path="/var/lib/kubelet/pods/20246f9b-686e-4c1e-b4b2-185e270eb8c4/volumes" Jan 23 18:01:05 crc kubenswrapper[4718]: I0123 18:01:05.970665 4718 generic.go:334] "Generic (PLEG): container finished" podID="4505ef81-9301-43a0-9b4e-12436bd34258" containerID="f4cd5552bc16117f41d01f1689cba4ce630744fb994f1979b80f2ec100d0e93b" exitCode=0 Jan 23 18:01:05 crc kubenswrapper[4718]: I0123 18:01:05.970763 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" event={"ID":"4505ef81-9301-43a0-9b4e-12436bd34258","Type":"ContainerDied","Data":"f4cd5552bc16117f41d01f1689cba4ce630744fb994f1979b80f2ec100d0e93b"} Jan 23 18:01:05 crc kubenswrapper[4718]: I0123 18:01:05.972844 4718 generic.go:334] "Generic (PLEG): container finished" podID="c16bee66-a1b2-451c-81b8-d33bba2aae01" containerID="b68ebbe6e8c03f296083370029042b24541a74d2a181bb52773380609d572576" exitCode=0 Jan 23 18:01:05 crc kubenswrapper[4718]: I0123 18:01:05.972878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-72c9v" event={"ID":"c16bee66-a1b2-451c-81b8-d33bba2aae01","Type":"ContainerDied","Data":"b68ebbe6e8c03f296083370029042b24541a74d2a181bb52773380609d572576"} Jan 23 18:01:06 crc kubenswrapper[4718]: I0123 18:01:06.034873 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-8pbcz"] Jan 23 18:01:06 crc kubenswrapper[4718]: I0123 18:01:06.047416 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zjg6c/crc-debug-8pbcz"] Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.171242 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.291893 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zl9p\" (UniqueName: \"kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p\") pod \"4505ef81-9301-43a0-9b4e-12436bd34258\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.293879 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host\") pod \"4505ef81-9301-43a0-9b4e-12436bd34258\" (UID: \"4505ef81-9301-43a0-9b4e-12436bd34258\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.294192 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host" (OuterVolumeSpecName: "host") pod "4505ef81-9301-43a0-9b4e-12436bd34258" (UID: "4505ef81-9301-43a0-9b4e-12436bd34258"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.295321 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4505ef81-9301-43a0-9b4e-12436bd34258-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.300474 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p" (OuterVolumeSpecName: "kube-api-access-5zl9p") pod "4505ef81-9301-43a0-9b4e-12436bd34258" (UID: "4505ef81-9301-43a0-9b4e-12436bd34258"). InnerVolumeSpecName "kube-api-access-5zl9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.397340 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zl9p\" (UniqueName: \"kubernetes.io/projected/4505ef81-9301-43a0-9b4e-12436bd34258-kube-api-access-5zl9p\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.397963 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-72c9v" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.499355 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle\") pod \"c16bee66-a1b2-451c-81b8-d33bba2aae01\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.499397 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data\") pod \"c16bee66-a1b2-451c-81b8-d33bba2aae01\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.499553 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys\") pod \"c16bee66-a1b2-451c-81b8-d33bba2aae01\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.499713 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jdz\" (UniqueName: \"kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz\") pod \"c16bee66-a1b2-451c-81b8-d33bba2aae01\" (UID: \"c16bee66-a1b2-451c-81b8-d33bba2aae01\") " Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 
18:01:07.505498 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c16bee66-a1b2-451c-81b8-d33bba2aae01" (UID: "c16bee66-a1b2-451c-81b8-d33bba2aae01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.505680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz" (OuterVolumeSpecName: "kube-api-access-p4jdz") pod "c16bee66-a1b2-451c-81b8-d33bba2aae01" (UID: "c16bee66-a1b2-451c-81b8-d33bba2aae01"). InnerVolumeSpecName "kube-api-access-p4jdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.549776 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16bee66-a1b2-451c-81b8-d33bba2aae01" (UID: "c16bee66-a1b2-451c-81b8-d33bba2aae01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.577776 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data" (OuterVolumeSpecName: "config-data") pod "c16bee66-a1b2-451c-81b8-d33bba2aae01" (UID: "c16bee66-a1b2-451c-81b8-d33bba2aae01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.602899 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4jdz\" (UniqueName: \"kubernetes.io/projected/c16bee66-a1b2-451c-81b8-d33bba2aae01-kube-api-access-p4jdz\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.602946 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.602960 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:07 crc kubenswrapper[4718]: I0123 18:01:07.602974 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c16bee66-a1b2-451c-81b8-d33bba2aae01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:08 crc kubenswrapper[4718]: I0123 18:01:08.011782 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/crc-debug-8pbcz" Jan 23 18:01:08 crc kubenswrapper[4718]: I0123 18:01:08.011787 4718 scope.go:117] "RemoveContainer" containerID="f4cd5552bc16117f41d01f1689cba4ce630744fb994f1979b80f2ec100d0e93b" Jan 23 18:01:08 crc kubenswrapper[4718]: I0123 18:01:08.015144 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-72c9v" event={"ID":"c16bee66-a1b2-451c-81b8-d33bba2aae01","Type":"ContainerDied","Data":"2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48"} Jan 23 18:01:08 crc kubenswrapper[4718]: I0123 18:01:08.015174 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd4c2c0fa127e28f9dc59995c24889eae5d17a96c1235366eb5034ad3557c48" Jan 23 18:01:08 crc kubenswrapper[4718]: I0123 18:01:08.015244 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-72c9v" Jan 23 18:01:09 crc kubenswrapper[4718]: I0123 18:01:09.155362 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4505ef81-9301-43a0-9b4e-12436bd34258" path="/var/lib/kubelet/pods/4505ef81-9301-43a0-9b4e-12436bd34258/volumes" Jan 23 18:01:12 crc kubenswrapper[4718]: I0123 18:01:12.140380 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:01:12 crc kubenswrapper[4718]: E0123 18:01:12.141210 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:01:25 crc kubenswrapper[4718]: I0123 18:01:25.140728 4718 scope.go:117] "RemoveContainer" 
containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:01:25 crc kubenswrapper[4718]: E0123 18:01:25.141517 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:01:31 crc kubenswrapper[4718]: I0123 18:01:31.777800 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-api/0.log" Jan 23 18:01:31 crc kubenswrapper[4718]: I0123 18:01:31.973738 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-evaluator/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.000530 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-listener/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.017793 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-notifier/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.210498 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b4d87654d-p9q2p_d3d50a24-2b4e-43eb-ac1a-2807554f0989/barbican-api/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.244739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b4d87654d-p9q2p_d3d50a24-2b4e-43eb-ac1a-2807554f0989/barbican-api-log/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.356932 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-655bd97bbb-6cj47_b146c37c-0473-4db8-a743-72a7576edf59/barbican-keystone-listener/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.526484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8474589d6c-tbnqc_12450ab0-8804-4354-83ff-47ca9b58bcec/barbican-worker/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.537410 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-655bd97bbb-6cj47_b146c37c-0473-4db8-a743-72a7576edf59/barbican-keystone-listener-log/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.617697 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8474589d6c-tbnqc_12450ab0-8804-4354-83ff-47ca9b58bcec/barbican-worker-log/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.765335 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8_d78674b0-cdd9-4a34-a2d0-b9eece735396/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:32 crc kubenswrapper[4718]: I0123 18:01:32.842052 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-central-agent/1.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.068520 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/proxy-httpd/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.070402 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-notification-agent/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.096001 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-central-agent/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.115925 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/sg-core/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.310260 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdab71e-08c8-4269-a9dd-69b152751e4d/cinder-api-log/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.386874 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdab71e-08c8-4269-a9dd-69b152751e4d/cinder-api/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.591251 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f1c0f246-5016-4f2f-94a8-5805981faffc/cinder-scheduler/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.632516 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f1c0f246-5016-4f2f-94a8-5805981faffc/probe/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.674644 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s9v79_a410865c-527d-4070-8dcd-d4ef16f73c82/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.881368 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg_2fcefafb-b44e-4b47-a2f1-302f824b0dd5/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:33 crc kubenswrapper[4718]: I0123 18:01:33.925280 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/init/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 
18:01:34.164700 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/dnsmasq-dns/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.167025 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/init/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.212655 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6_c94d0e96-185e-4f09-bb48-9fb2e6874fec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.401869 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3948da45-04b4-4a32-b5d5-0701d87095a7/glance-httpd/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.417855 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3948da45-04b4-4a32-b5d5-0701d87095a7/glance-log/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.627165 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_47efd469-ac22-42d8-bb00-fd20450c9e7e/glance-httpd/0.log" Jan 23 18:01:34 crc kubenswrapper[4718]: I0123 18:01:34.663765 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_47efd469-ac22-42d8-bb00-fd20450c9e7e/glance-log/0.log" Jan 23 18:01:35 crc kubenswrapper[4718]: I0123 18:01:35.395138 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-95db6b64d-5qj7l_6e6107cd-49bf-4f98-a70b-715fcdcc1535/heat-api/0.log" Jan 23 18:01:35 crc kubenswrapper[4718]: I0123 18:01:35.599483 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-6bf6f4bd98-77tgt_7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8/heat-cfnapi/0.log" Jan 23 18:01:35 crc kubenswrapper[4718]: I0123 18:01:35.688102 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-k98h8_e4ba8316-551c-484b-b458-1feab6b0e72b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:35 crc kubenswrapper[4718]: I0123 18:01:35.908066 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6b6465d99d-xv658_6bb4cb4d-9614-4570-a061-73f87bc9a159/heat-engine/0.log" Jan 23 18:01:35 crc kubenswrapper[4718]: I0123 18:01:35.960874 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kp4hx_cec28e23-37c2-4a27-872d-40cb7ad130c5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.121749 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486461-g5844_5f29f7cb-356b-4f33-a5db-2b6977793db4/keystone-cron/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.140720 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:01:36 crc kubenswrapper[4718]: E0123 18:01:36.141020 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.217225 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29486521-72c9v_c16bee66-a1b2-451c-81b8-d33bba2aae01/keystone-cron/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.473137 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8ee3d2a0-3f10-40d9-980c-deb1bc35b613/kube-state-metrics/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.680483 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv_680f27a4-945b-4f46-ae19-c0b05b6f3d4c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.752252 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-pbkhz_86526a30-7eef-4621-944a-cab9bd64903b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:36 crc kubenswrapper[4718]: I0123 18:01:36.968493 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f6b6f5fd7-qpbqz_e836bdf5-8379-4f60-8dbe-7be5381ed922/keystone-api/0.log" Jan 23 18:01:37 crc kubenswrapper[4718]: I0123 18:01:37.086024 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_fa116646-6ee2-42f2-8a0f-56459516d495/mysqld-exporter/0.log" Jan 23 18:01:37 crc kubenswrapper[4718]: I0123 18:01:37.502465 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8567b78dd5-chd6w_5c367121-318c-413c-96e5-f53a105d91d3/neutron-httpd/0.log" Jan 23 18:01:37 crc kubenswrapper[4718]: I0123 18:01:37.523347 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w_90730357-1c99-420b-8ff6-f82638fbd43f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:37 crc kubenswrapper[4718]: I0123 18:01:37.621431 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-8567b78dd5-chd6w_5c367121-318c-413c-96e5-f53a105d91d3/neutron-api/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.111281 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0ac70adf-8253-4b66-91b9-beb3c28648d5/nova-cell0-conductor-conductor/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.494314 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_353b7e73-13e9-4989-8f55-5dedebe8e92a/nova-api-log/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.508365 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7057072a-eda4-442e-9cb6-b9c2dbaebe3d/nova-cell1-conductor-conductor/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.823697 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-zq8vl_7354170c-e5c6-4c6e-be23-d2c6bd685aa0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.862151 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2ec50566-57bf-4ddf-aa36-4dfe1fa36d07/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 18:01:38 crc kubenswrapper[4718]: I0123 18:01:38.997479 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_353b7e73-13e9-4989-8f55-5dedebe8e92a/nova-api-api/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.179934 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_82c5d1a7-2493-4399-9a20-247f71a1c754/nova-metadata-log/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.528446 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f249685b-e052-4a6c-b34e-28fa3fe0610a/nova-scheduler-scheduler/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.536668 
4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/mysql-bootstrap/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.744292 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/mysql-bootstrap/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.778897 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/galera/0.log" Jan 23 18:01:39 crc kubenswrapper[4718]: I0123 18:01:39.934543 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/mysql-bootstrap/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.150620 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/mysql-bootstrap/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.176357 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/galera/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.451008 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ed1d44ad-8796-452b-a194-17b351fc8c01/openstackclient/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.535551 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c9rfg_d2e5ad0b-04cf-49a9-badc-9e3184385c5b/ovn-controller/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.759294 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mwsph_760d2aa5-6dd3-43b1-8447-b1d1e655ee14/openstack-network-exporter/0.log" Jan 23 18:01:40 crc kubenswrapper[4718]: I0123 18:01:40.979442 4718 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server-init/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.122740 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server-init/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.126049 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovs-vswitchd/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.205742 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.387519 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-2lgn4_3c8cfb53-9d77-472b-a67e-cfe479ef8aa3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.568595 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_873a6275-b8f2-4554-9c4d-f44a6629111d/openstack-network-exporter/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.581892 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_873a6275-b8f2-4554-9c4d-f44a6629111d/ovn-northd/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.794085 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_82c5d1a7-2493-4399-9a20-247f71a1c754/nova-metadata-metadata/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.806716 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cf53eabe-609c-471c-ae7e-ca9fb950f86e/ovsdbserver-nb/0.log" Jan 23 18:01:41 crc kubenswrapper[4718]: I0123 18:01:41.810104 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cf53eabe-609c-471c-ae7e-ca9fb950f86e/openstack-network-exporter/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.060114 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_28af9db0-905d-46cc-8ab9-887e0f58ee9b/ovsdbserver-sb/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.063434 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_28af9db0-905d-46cc-8ab9-887e0f58ee9b/openstack-network-exporter/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.337228 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/init-config-reloader/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.411557 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dffc5fb8-5997w_4878ef2b-0a67-424e-95b7-53803746d9f3/placement-api/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.514404 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dffc5fb8-5997w_4878ef2b-0a67-424e-95b7-53803746d9f3/placement-log/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.630972 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/init-config-reloader/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.640584 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/config-reloader/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.693984 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/prometheus/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.776791 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/thanos-sidecar/0.log" Jan 23 18:01:42 crc kubenswrapper[4718]: I0123 18:01:42.884919 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.075289 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/rabbitmq/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.099434 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.173248 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.289520 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.401818 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.415355 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/rabbitmq/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.645956 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.722065 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/setup-container/0.log" Jan 23 18:01:43 crc kubenswrapper[4718]: I0123 18:01:43.734165 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/rabbitmq/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.033111 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/setup-container/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.070033 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh_ecb3058c-dfcb-4950-8c2a-3dba0200135f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.138743 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/rabbitmq/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.283007 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xcm92_d95287f3-d510-4991-bde5-94259e7c64d4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.387801 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-n777k_7d9a564a-3bb5-421a-a861-721b16ae1adc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.518205 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cbknc_876f2274-0082-4049-a9a1-e8ed6b517b57/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.634053 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-46ztt_1c13cc81-99d0-465b-a13d-638a7482f669/ssh-known-hosts-edpm-deployment/0.log" Jan 23 18:01:44 crc kubenswrapper[4718]: I0123 18:01:44.908089 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b6b78dc95-9ft97_6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1/proxy-server/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.026032 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b6b78dc95-9ft97_6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1/proxy-httpd/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.052256 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v2fl4_9a040b54-6ee7-446b-83f1-b6b5c211ef43/swift-ring-rebalance/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.266131 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-auditor/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.298598 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-reaper/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.409011 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-replicator/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.483054 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-auditor/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.558245 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-server/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.623488 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-replicator/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.633600 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-server/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.720981 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-updater/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.785187 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-auditor/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.866615 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-expirer/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.982488 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-replicator/0.log" Jan 23 18:01:45 crc kubenswrapper[4718]: I0123 18:01:45.985177 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-server/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.057726 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-updater/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.133328 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/rsync/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.240542 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/swift-recon-cron/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.397025 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5gq77_708430f1-d1c7-46ef-9e2c-9077a85c95fb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.767800 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm_7775dfb4-42b6-411d-8dc1-efe8daad5960/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:46 crc kubenswrapper[4718]: I0123 18:01:46.959681 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_68ee45e3-d5d8-4579-85c7-d46064d4ac6c/test-operator-logs-container/0.log" Jan 23 18:01:47 crc kubenswrapper[4718]: I0123 18:01:47.224475 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7jmll_25a0278c-a9ab-4c21-af05-e4fed25e299d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:01:47 crc kubenswrapper[4718]: I0123 18:01:47.782860 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_21700319-2dc7-41c4-8377-8ba6ef629cbb/tempest-tests-tempest-tests-runner/0.log" Jan 23 18:01:48 crc kubenswrapper[4718]: I0123 18:01:48.140544 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:01:48 crc kubenswrapper[4718]: E0123 18:01:48.140884 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:01:56 crc kubenswrapper[4718]: I0123 18:01:56.169482 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ffd9d6ca-1e5b-4102-8b5b-664ebd967619/memcached/0.log" Jan 23 18:02:02 crc kubenswrapper[4718]: I0123 18:02:02.140942 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:02:02 crc kubenswrapper[4718]: I0123 18:02:02.743813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765"} Jan 23 18:02:17 crc kubenswrapper[4718]: I0123 18:02:17.650415 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-62wgc_858bcd70-b537-4da9-8ca9-27c1724ece99/manager/1.log" Jan 23 18:02:17 crc kubenswrapper[4718]: I0123 18:02:17.800482 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-62wgc_858bcd70-b537-4da9-8ca9-27c1724ece99/manager/0.log" Jan 23 18:02:17 crc kubenswrapper[4718]: I0123 18:02:17.909524 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-9r22c_a6012879-2e20-485d-829f-3a9ec3e5bcb1/manager/1.log" Jan 23 18:02:17 crc kubenswrapper[4718]: I0123 18:02:17.968331 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-9r22c_a6012879-2e20-485d-829f-3a9ec3e5bcb1/manager/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 
18:02:18.178201 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zs7zk_8d9099e2-7f4f-42d8-8e76-d2d8347a1514/manager/1.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.225914 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zs7zk_8d9099e2-7f4f-42d8-8e76-d2d8347a1514/manager/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.296684 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.559381 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.567270 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.579345 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.749719 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.771539 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/extract/0.log" Jan 23 18:02:18 crc kubenswrapper[4718]: I0123 18:02:18.783972 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.032308 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jjplg_9e8950bc-8213-40eb-9bb7-2e1a8c66b57b/manager/1.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.079032 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jjplg_9e8950bc-8213-40eb-9bb7-2e1a8c66b57b/manager/0.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.104148 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-dfwk2_d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2/manager/1.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.310036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sr2hw_d869ec7c-ddd9-4e17-9154-a793539a2a00/manager/1.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.400594 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-dfwk2_d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2/manager/0.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.424251 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sr2hw_d869ec7c-ddd9-4e17-9154-a793539a2a00/manager/0.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.544316 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-4tm4n_06df7a47-9233-4957-936e-27f58aeb0000/manager/1.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.822618 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8fsk_16e17ade-97be-48d4-83d4-7ac385174edd/manager/0.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.859225 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8fsk_16e17ade-97be-48d4-83d4-7ac385174edd/manager/1.log" Jan 23 18:02:19 crc kubenswrapper[4718]: I0123 18:02:19.910079 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-4tm4n_06df7a47-9233-4957-936e-27f58aeb0000/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.049041 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-nwpcs_50178034-67cf-4f8d-89bb-788c8a73a72a/manager/1.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.148939 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-nwpcs_50178034-67cf-4f8d-89bb-788c8a73a72a/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.165596 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-jbxnk_32d58a3a-df31-492e-a2c2-2f5ca31c5f90/manager/1.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.285480 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-jbxnk_32d58a3a-df31-492e-a2c2-2f5ca31c5f90/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.366280 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l_9a95eff5-116c-4141-bee6-5bda12f21e11/manager/1.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.407608 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l_9a95eff5-116c-4141-bee6-5bda12f21e11/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.579760 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sr4hx_8e29e3d6-21d7-4a1a-832e-f831d884fd00/manager/1.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.635149 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sr4hx_8e29e3d6-21d7-4a1a-832e-f831d884fd00/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.876609 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-2m5hx_ae7c1f40-90dd-441b-9dc5-608e1a503f4c/manager/1.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.953818 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-2m5hx_ae7c1f40-90dd-441b-9dc5-608e1a503f4c/manager/0.log" Jan 23 18:02:20 crc kubenswrapper[4718]: I0123 18:02:20.984859 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kn2t8_2062a379-6201-4835-8974-24befcfbf8e0/manager/1.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.138740 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kn2t8_2062a379-6201-4835-8974-24befcfbf8e0/manager/0.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 
18:02:21.185133 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854bhf98_18395392-bb8d-49be-9b49-950d6f32b9f6/manager/1.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.241056 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854bhf98_18395392-bb8d-49be-9b49-950d6f32b9f6/manager/0.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.451601 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7549f75f-929gl_0c42f381-34a5-4913-90b0-0bbc4e0810fd/operator/1.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.563611 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7549f75f-929gl_0c42f381-34a5-4913-90b0-0bbc4e0810fd/operator/0.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.629931 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74c6db8f6f-rkhth_369053b2-11b0-4e19-a77d-3ea9cf595039/manager/1.log" Jan 23 18:02:21 crc kubenswrapper[4718]: I0123 18:02:21.846622 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zvx6t_f519ad69-0e68-44c6-9805-40fb66819268/registry-server/0.log" Jan 23 18:02:22 crc kubenswrapper[4718]: I0123 18:02:22.081727 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-znjjw_0fe9ca7e-5763-4cba-afc1-94065f21f33e/manager/1.log" Jan 23 18:02:22 crc kubenswrapper[4718]: I0123 18:02:22.236891 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-znjjw_0fe9ca7e-5763-4cba-afc1-94065f21f33e/manager/0.log" Jan 23 18:02:22 crc 
kubenswrapper[4718]: I0123 18:02:22.274610 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-9kl82_49cc2143-a384-436e-8eef-4d7474918177/manager/1.log"
Jan 23 18:02:22 crc kubenswrapper[4718]: I0123 18:02:22.387405 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-9kl82_49cc2143-a384-436e-8eef-4d7474918177/manager/0.log"
Jan 23 18:02:22 crc kubenswrapper[4718]: I0123 18:02:22.482779 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64ktf_235aadec-9416-469c-8455-64dd1bc82a08/operator/1.log"
Jan 23 18:02:22 crc kubenswrapper[4718]: I0123 18:02:22.837753 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64ktf_235aadec-9416-469c-8455-64dd1bc82a08/operator/0.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.013355 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-87q6t_7cd4d741-2a88-466f-a644-a1c6c62e521b/manager/1.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.054026 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-87q6t_7cd4d741-2a88-466f-a644-a1c6c62e521b/manager/0.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.055471 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74c6db8f6f-rkhth_369053b2-11b0-4e19-a77d-3ea9cf595039/manager/0.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.201352 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c7754d696-xthck_f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078/manager/1.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.310379 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9vg4k_3cfce3f5-1f59-43ae-aa99-2483cfb33806/manager/1.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.453072 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9vg4k_3cfce3f5-1f59-43ae-aa99-2483cfb33806/manager/0.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.540226 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c7754d696-xthck_f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078/manager/0.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.563835 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5lxl4_addb55c8-8565-42c2-84d2-7ee7e8693a3a/manager/1.log"
Jan 23 18:02:23 crc kubenswrapper[4718]: I0123 18:02:23.670115 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5lxl4_addb55c8-8565-42c2-84d2-7ee7e8693a3a/manager/0.log"
Jan 23 18:02:43 crc kubenswrapper[4718]: I0123 18:02:43.064101 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-69wt2_7a72377e-a621-4ebb-b31a-7f405b218eb6/control-plane-machine-set-operator/0.log"
Jan 23 18:02:43 crc kubenswrapper[4718]: I0123 18:02:43.272511 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5vn_8ed9dcbf-5502-4797-9b65-ff900aa065d8/kube-rbac-proxy/0.log"
Jan 23 18:02:43 crc kubenswrapper[4718]: I0123 18:02:43.282196 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5vn_8ed9dcbf-5502-4797-9b65-ff900aa065d8/machine-api-operator/0.log"
Jan 23 18:02:55 crc kubenswrapper[4718]: I0123 18:02:55.963345 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8jlw4_1e5ee60b-7363-4a74-b69d-1f4f474166e0/cert-manager-controller/1.log"
Jan 23 18:02:56 crc kubenswrapper[4718]: I0123 18:02:56.084516 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8jlw4_1e5ee60b-7363-4a74-b69d-1f4f474166e0/cert-manager-controller/0.log"
Jan 23 18:02:56 crc kubenswrapper[4718]: I0123 18:02:56.216739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-sntwx_57379fa4-b935-4095-a6c1-9e83709c5906/cert-manager-cainjector/1.log"
Jan 23 18:02:56 crc kubenswrapper[4718]: I0123 18:02:56.363099 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-sntwx_57379fa4-b935-4095-a6c1-9e83709c5906/cert-manager-cainjector/0.log"
Jan 23 18:02:56 crc kubenswrapper[4718]: I0123 18:02:56.420280 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-br6fl_f99e5457-16fb-453f-909c-a8364ffc0372/cert-manager-webhook/0.log"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.457058 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:06 crc kubenswrapper[4718]: E0123 18:03:06.460287 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16bee66-a1b2-451c-81b8-d33bba2aae01" containerName="keystone-cron"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.460311 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16bee66-a1b2-451c-81b8-d33bba2aae01" containerName="keystone-cron"
Jan 23 18:03:06 crc kubenswrapper[4718]: E0123 18:03:06.460346 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4505ef81-9301-43a0-9b4e-12436bd34258" containerName="container-00"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.460353 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4505ef81-9301-43a0-9b4e-12436bd34258" containerName="container-00"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.460660 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4505ef81-9301-43a0-9b4e-12436bd34258" containerName="container-00"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.460684 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16bee66-a1b2-451c-81b8-d33bba2aae01" containerName="keystone-cron"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.462738 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.477584 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.521134 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492z7\" (UniqueName: \"kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.521523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.521660 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.624489 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492z7\" (UniqueName: \"kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.624675 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.624729 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.625402 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.625428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.646820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492z7\" (UniqueName: \"kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7\") pod \"redhat-operators-lhzf5\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") " pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:06 crc kubenswrapper[4718]: I0123 18:03:06.785346 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:07 crc kubenswrapper[4718]: I0123 18:03:07.315798 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:07 crc kubenswrapper[4718]: I0123 18:03:07.517310 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerStarted","Data":"940d31ab4f1c8768e4cca977f0fbacfe702cedaf04af3aa12983c63433702a46"}
Jan 23 18:03:08 crc kubenswrapper[4718]: I0123 18:03:08.529414 4718 generic.go:334] "Generic (PLEG): container finished" podID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerID="73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de" exitCode=0
Jan 23 18:03:08 crc kubenswrapper[4718]: I0123 18:03:08.529529 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerDied","Data":"73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de"}
Jan 23 18:03:08 crc kubenswrapper[4718]: I0123 18:03:08.830361 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2h982_4ff516ae-ef38-4eb8-9721-b5e809fa1a53/nmstate-console-plugin/0.log"
Jan 23 18:03:09 crc kubenswrapper[4718]: I0123 18:03:09.111088 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hbmxh_c66f413f-8a00-4526-b93f-4d739aec140c/nmstate-handler/0.log"
Jan 23 18:03:09 crc kubenswrapper[4718]: I0123 18:03:09.246701 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4r42t_c9ada4d9-34eb-43fb-a0ba-09b879eab797/kube-rbac-proxy/0.log"
Jan 23 18:03:09 crc kubenswrapper[4718]: I0123 18:03:09.346107 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4r42t_c9ada4d9-34eb-43fb-a0ba-09b879eab797/nmstate-metrics/0.log"
Jan 23 18:03:09 crc kubenswrapper[4718]: I0123 18:03:09.448706 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2n5g_1ae3f970-005a-47f5-9539-ba299ac76301/nmstate-operator/0.log"
Jan 23 18:03:09 crc kubenswrapper[4718]: I0123 18:03:09.664891 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hqbgx_9d41c1ee-b304-42c0-a2e7-2fe83315a430/nmstate-webhook/0.log"
Jan 23 18:03:10 crc kubenswrapper[4718]: I0123 18:03:10.553224 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerStarted","Data":"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"}
Jan 23 18:03:14 crc kubenswrapper[4718]: I0123 18:03:14.597955 4718 generic.go:334] "Generic (PLEG): container finished" podID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerID="9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51" exitCode=0
Jan 23 18:03:14 crc kubenswrapper[4718]: I0123 18:03:14.598035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerDied","Data":"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"}
Jan 23 18:03:15 crc kubenswrapper[4718]: I0123 18:03:15.612781 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerStarted","Data":"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"}
Jan 23 18:03:15 crc kubenswrapper[4718]: I0123 18:03:15.633204 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhzf5" podStartSLOduration=3.037812435 podStartE2EDuration="9.633185483s" podCreationTimestamp="2026-01-23 18:03:06 +0000 UTC" firstStartedPulling="2026-01-23 18:03:08.532104363 +0000 UTC m=+6389.679346354" lastFinishedPulling="2026-01-23 18:03:15.127477411 +0000 UTC m=+6396.274719402" observedRunningTime="2026-01-23 18:03:15.628558647 +0000 UTC m=+6396.775800658" watchObservedRunningTime="2026-01-23 18:03:15.633185483 +0000 UTC m=+6396.780427474"
Jan 23 18:03:16 crc kubenswrapper[4718]: I0123 18:03:16.785953 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:16 crc kubenswrapper[4718]: I0123 18:03:16.786319 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:17 crc kubenswrapper[4718]: I0123 18:03:17.854911 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhzf5" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" probeResult="failure" output=<
Jan 23 18:03:17 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Jan 23 18:03:17 crc kubenswrapper[4718]: >
Jan 23 18:03:22 crc kubenswrapper[4718]: I0123 18:03:22.956858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/kube-rbac-proxy/0.log"
Jan 23 18:03:23 crc kubenswrapper[4718]: I0123 18:03:23.027754 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/1.log"
Jan 23 18:03:23 crc kubenswrapper[4718]: I0123 18:03:23.246658 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/0.log"
Jan 23 18:03:27 crc kubenswrapper[4718]: I0123 18:03:27.868505 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhzf5" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" probeResult="failure" output=<
Jan 23 18:03:27 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Jan 23 18:03:27 crc kubenswrapper[4718]: >
Jan 23 18:03:35 crc kubenswrapper[4718]: I0123 18:03:35.713146 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zp26h_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52/prometheus-operator/0.log"
Jan 23 18:03:35 crc kubenswrapper[4718]: I0123 18:03:35.911960 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_c539786a-23c4-4f13-a3d7-d2166df63aed/prometheus-operator-admission-webhook/0.log"
Jan 23 18:03:35 crc kubenswrapper[4718]: I0123 18:03:35.939269 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_38c76550-362b-4f9e-b1fa-58de8a6356a9/prometheus-operator-admission-webhook/0.log"
Jan 23 18:03:36 crc kubenswrapper[4718]: I0123 18:03:36.097436 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5qrhk_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0/operator/0.log"
Jan 23 18:03:36 crc kubenswrapper[4718]: I0123 18:03:36.178898 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-js85h_206601f2-166b-4dcf-9f9b-77a64e3f6c5b/observability-ui-dashboards/0.log"
Jan 23 18:03:36 crc kubenswrapper[4718]: I0123 18:03:36.276999 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b9lkr_8a751218-1b91-4c7f-be34-ea4036ca440f/perses-operator/0.log"
Jan 23 18:03:36 crc kubenswrapper[4718]: I0123 18:03:36.858221 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:36 crc kubenswrapper[4718]: I0123 18:03:36.917980 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:37 crc kubenswrapper[4718]: I0123 18:03:37.663020 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:38 crc kubenswrapper[4718]: I0123 18:03:38.907773 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhzf5" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" containerID="cri-o://9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507" gracePeriod=2
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.562260 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.632546 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content\") pod \"5ac1a6bf-8958-43f3-98e2-06222f32d140\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") "
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.632823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities\") pod \"5ac1a6bf-8958-43f3-98e2-06222f32d140\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") "
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.632992 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-492z7\" (UniqueName: \"kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7\") pod \"5ac1a6bf-8958-43f3-98e2-06222f32d140\" (UID: \"5ac1a6bf-8958-43f3-98e2-06222f32d140\") "
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.635427 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities" (OuterVolumeSpecName: "utilities") pod "5ac1a6bf-8958-43f3-98e2-06222f32d140" (UID: "5ac1a6bf-8958-43f3-98e2-06222f32d140"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.637692 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.658537 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7" (OuterVolumeSpecName: "kube-api-access-492z7") pod "5ac1a6bf-8958-43f3-98e2-06222f32d140" (UID: "5ac1a6bf-8958-43f3-98e2-06222f32d140"). InnerVolumeSpecName "kube-api-access-492z7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.740562 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-492z7\" (UniqueName: \"kubernetes.io/projected/5ac1a6bf-8958-43f3-98e2-06222f32d140-kube-api-access-492z7\") on node \"crc\" DevicePath \"\""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.781604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ac1a6bf-8958-43f3-98e2-06222f32d140" (UID: "5ac1a6bf-8958-43f3-98e2-06222f32d140"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.842458 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ac1a6bf-8958-43f3-98e2-06222f32d140-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.919506 4718 generic.go:334] "Generic (PLEG): container finished" podID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerID="9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507" exitCode=0
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.919557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerDied","Data":"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"}
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.919594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhzf5" event={"ID":"5ac1a6bf-8958-43f3-98e2-06222f32d140","Type":"ContainerDied","Data":"940d31ab4f1c8768e4cca977f0fbacfe702cedaf04af3aa12983c63433702a46"}
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.919612 4718 scope.go:117] "RemoveContainer" containerID="9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.919641 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhzf5"
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.941808 4718 scope.go:117] "RemoveContainer" containerID="9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.956418 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.974352 4718 scope.go:117] "RemoveContainer" containerID="73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de"
Jan 23 18:03:39 crc kubenswrapper[4718]: I0123 18:03:39.974366 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhzf5"]
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.026492 4718 scope.go:117] "RemoveContainer" containerID="9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"
Jan 23 18:03:40 crc kubenswrapper[4718]: E0123 18:03:40.026966 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507\": container with ID starting with 9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507 not found: ID does not exist" containerID="9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.027008 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507"} err="failed to get container status \"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507\": rpc error: code = NotFound desc = could not find container \"9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507\": container with ID starting with 9b83557e529fae88edb1486593d33e9590d82e5bc9bf0e65b042535c668ad507 not found: ID does not exist"
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.027034 4718 scope.go:117] "RemoveContainer" containerID="9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"
Jan 23 18:03:40 crc kubenswrapper[4718]: E0123 18:03:40.027427 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51\": container with ID starting with 9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51 not found: ID does not exist" containerID="9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.027456 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51"} err="failed to get container status \"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51\": rpc error: code = NotFound desc = could not find container \"9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51\": container with ID starting with 9aa929b7a94945ad12a6f439965ee16fba49ac0335faff61ec17c5755a5cec51 not found: ID does not exist"
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.027476 4718 scope.go:117] "RemoveContainer" containerID="73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de"
Jan 23 18:03:40 crc kubenswrapper[4718]: E0123 18:03:40.027918 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de\": container with ID starting with 73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de not found: ID does not exist" containerID="73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de"
Jan 23 18:03:40 crc kubenswrapper[4718]: I0123 18:03:40.027949 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de"} err="failed to get container status \"73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de\": rpc error: code = NotFound desc = could not find container \"73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de\": container with ID starting with 73f9c0931ed088be56dc60ec4ba16830403b93b06b8cf0cff4e6796ab1f255de not found: ID does not exist"
Jan 23 18:03:41 crc kubenswrapper[4718]: I0123 18:03:41.153970 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" path="/var/lib/kubelet/pods/5ac1a6bf-8958-43f3-98e2-06222f32d140/volumes"
Jan 23 18:03:50 crc kubenswrapper[4718]: I0123 18:03:50.628368 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-6gvnk_cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a/cluster-logging-operator/0.log"
Jan 23 18:03:50 crc kubenswrapper[4718]: I0123 18:03:50.841791 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-b6hvd_e45ebe67-eb65-4bf4-8d7b-f03a7113f22e/collector/0.log"
Jan 23 18:03:50 crc kubenswrapper[4718]: I0123 18:03:50.900211 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_c3a642bb-f3f3-4e14-9442-0aa47e1b7b43/loki-compactor/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.012037 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-tpqfn_c2309723-2af5-455a-8f21-41e08e80d045/loki-distributor/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.064869 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-c72c6_46bec7ac-b95d-425d-ab7a-4a669278b158/gateway/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.192389 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-c72c6_46bec7ac-b95d-425d-ab7a-4a669278b158/opa/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.240725 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-th7vf_5aca942e-fa67-4679-a257-6db5cf93a95a/gateway/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.263800 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-th7vf_5aca942e-fa67-4679-a257-6db5cf93a95a/opa/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.404752 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_014d7cb2-435f-4a6f-85af-6bc6553d6704/loki-index-gateway/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.505653 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_cdb096c7-cef2-48a8-9f83-4752311a02be/loki-ingester/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.662953 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-2vjw9_03861098-f572-4ace-ab3b-7fddb749da7d/loki-querier/0.log"
Jan 23 18:03:51 crc kubenswrapper[4718]: I0123 18:03:51.696392 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-5cw9h_96cdf9bc-4893-4918-94e9-a23212e8ec5c/loki-query-frontend/0.log"
Jan 23 18:04:04 crc kubenswrapper[4718]: I0123 18:04:04.993710 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pd8td_9e7b3c6e-a339-4412-aecf-1091bfc315a5/kube-rbac-proxy/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.196483 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.228246 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pd8td_9e7b3c6e-a339-4412-aecf-1091bfc315a5/controller/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.455279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.466889 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.481265 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.481340 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.663604 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.672048 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.707558 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.720809 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.853824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.886949 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.900068 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log"
Jan 23 18:04:05 crc kubenswrapper[4718]: I0123 18:04:05.915810 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/controller/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.057103 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/frr-metrics/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.094562 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/kube-rbac-proxy/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.128736 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/kube-rbac-proxy-frr/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.330604 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/reloader/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.359186 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2kdmh_0a04951a-b116-4c6f-ad48-4742051ef181/frr-k8s-webhook-server/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.589886 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fcf7954b-5fmcn_7eb6e283-9137-4b68-88b1-9a9dccb9fcd5/manager/0.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.660033 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fcf7954b-5fmcn_7eb6e283-9137-4b68-88b1-9a9dccb9fcd5/manager/1.log"
Jan 23 18:04:06 crc kubenswrapper[4718]: I0123 18:04:06.845363 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6865b95b75-sk5rr_b6a8f377-b8e9-4241-a0fa-b40031d27cd7/webhook-server/0.log"
Jan 23 18:04:07 crc kubenswrapper[4718]: I0123 18:04:07.007909 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rfc5b_b47f2ba5-694f-4929-9932-a844b35ba149/kube-rbac-proxy/0.log"
Jan 23 18:04:07 crc kubenswrapper[4718]: I0123 18:04:07.926260 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rfc5b_b47f2ba5-694f-4929-9932-a844b35ba149/speaker/0.log"
Jan 23 18:04:08 crc kubenswrapper[4718]: I0123 18:04:08.346871 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/frr/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.226603 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.394110 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.432934 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.464816 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.587746 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.590416 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.611032 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/extract/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.768705 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log"
Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.954767 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log"
Jan 23 
18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.961894 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:04:20 crc kubenswrapper[4718]: I0123 18:04:20.979168 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.146031 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.154437 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.161919 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/extract/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.330840 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.538309 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.551587 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.591906 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.705938 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.736920 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.749273 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/extract/0.log" Jan 23 18:04:21 crc kubenswrapper[4718]: I0123 18:04:21.886437 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.073121 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.122021 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 
18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.145551 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.310229 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.330072 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.336193 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/extract/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.525902 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.655519 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.690666 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.690812 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.876294 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.895991 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:04:22 crc kubenswrapper[4718]: I0123 18:04:22.916619 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/extract/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.047843 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.277474 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.277598 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.285192 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.461666 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.469036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.740959 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.963274 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:04:23 crc kubenswrapper[4718]: I0123 18:04:23.975057 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.008093 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.265296 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.271904 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.484344 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bzfg_ad5b2aea-ec41-49cb-ac4b-0497fed12dab/marketplace-operator/1.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.500023 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/registry-server/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.535418 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bzfg_ad5b2aea-ec41-49cb-ac4b-0497fed12dab/marketplace-operator/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.726312 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.968614 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:04:24 crc kubenswrapper[4718]: I0123 18:04:24.995727 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.007099 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.033663 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/registry-server/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.171690 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.185157 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.230851 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.498430 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/registry-server/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.533845 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.542087 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.547639 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.703288 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:04:25 crc kubenswrapper[4718]: I0123 18:04:25.709120 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 
23 18:04:26 crc kubenswrapper[4718]: I0123 18:04:26.553473 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/registry-server/0.log" Jan 23 18:04:28 crc kubenswrapper[4718]: I0123 18:04:28.875754 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:28 crc kubenswrapper[4718]: I0123 18:04:28.876172 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:04:37 crc kubenswrapper[4718]: I0123 18:04:37.817719 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_38c76550-362b-4f9e-b1fa-58de8a6356a9/prometheus-operator-admission-webhook/0.log" Jan 23 18:04:37 crc kubenswrapper[4718]: I0123 18:04:37.858405 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zp26h_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52/prometheus-operator/0.log" Jan 23 18:04:37 crc kubenswrapper[4718]: I0123 18:04:37.870080 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_c539786a-23c4-4f13-a3d7-d2166df63aed/prometheus-operator-admission-webhook/0.log" Jan 23 18:04:38 crc kubenswrapper[4718]: I0123 18:04:38.023065 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5qrhk_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0/operator/0.log" Jan 23 18:04:38 crc kubenswrapper[4718]: I0123 18:04:38.037677 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-js85h_206601f2-166b-4dcf-9f9b-77a64e3f6c5b/observability-ui-dashboards/0.log" Jan 23 18:04:38 crc kubenswrapper[4718]: I0123 18:04:38.071744 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b9lkr_8a751218-1b91-4c7f-be34-ea4036ca440f/perses-operator/0.log" Jan 23 18:04:51 crc kubenswrapper[4718]: I0123 18:04:51.987837 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/1.log" Jan 23 18:04:52 crc kubenswrapper[4718]: I0123 18:04:52.013806 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/0.log" Jan 23 18:04:52 crc kubenswrapper[4718]: I0123 18:04:52.053530 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/kube-rbac-proxy/0.log" Jan 23 18:04:58 crc kubenswrapper[4718]: I0123 18:04:58.876065 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:58 crc kubenswrapper[4718]: I0123 18:04:58.876548 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.451503 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:11 crc kubenswrapper[4718]: E0123 18:05:11.452547 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.452561 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" Jan 23 18:05:11 crc kubenswrapper[4718]: E0123 18:05:11.452592 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="extract-utilities" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.452600 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="extract-utilities" Jan 23 18:05:11 crc kubenswrapper[4718]: E0123 18:05:11.453216 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="extract-content" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.453231 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="extract-content" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.453513 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac1a6bf-8958-43f3-98e2-06222f32d140" containerName="registry-server" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.481820 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.481929 4718 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.531308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvz9\" (UniqueName: \"kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.531515 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.531605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.634340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvz9\" (UniqueName: \"kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.634751 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.634819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.635988 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.636445 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.658651 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvz9\" (UniqueName: \"kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9\") pod \"community-operators-njqgt\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:11 crc kubenswrapper[4718]: I0123 18:05:11.812476 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:12 crc kubenswrapper[4718]: I0123 18:05:12.656565 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:13 crc kubenswrapper[4718]: I0123 18:05:13.150417 4718 generic.go:334] "Generic (PLEG): container finished" podID="41edd2c4-c9c7-4087-b474-a907966763be" containerID="16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d" exitCode=0 Jan 23 18:05:13 crc kubenswrapper[4718]: I0123 18:05:13.153796 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:05:13 crc kubenswrapper[4718]: I0123 18:05:13.156539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerDied","Data":"16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d"} Jan 23 18:05:13 crc kubenswrapper[4718]: I0123 18:05:13.156600 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerStarted","Data":"322e1801ae9de393ab7053933e765fa5064a2f040fa56f4327c8bafefcfe1377"} Jan 23 18:05:14 crc kubenswrapper[4718]: I0123 18:05:14.163751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerStarted","Data":"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019"} Jan 23 18:05:15 crc kubenswrapper[4718]: I0123 18:05:15.191739 4718 generic.go:334] "Generic (PLEG): container finished" podID="41edd2c4-c9c7-4087-b474-a907966763be" containerID="00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019" exitCode=0 Jan 23 18:05:15 crc kubenswrapper[4718]: I0123 18:05:15.192008 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerDied","Data":"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019"} Jan 23 18:05:16 crc kubenswrapper[4718]: I0123 18:05:16.205946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerStarted","Data":"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a"} Jan 23 18:05:16 crc kubenswrapper[4718]: I0123 18:05:16.238770 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-njqgt" podStartSLOduration=2.75404053 podStartE2EDuration="5.238750066s" podCreationTimestamp="2026-01-23 18:05:11 +0000 UTC" firstStartedPulling="2026-01-23 18:05:13.153347394 +0000 UTC m=+6514.300589385" lastFinishedPulling="2026-01-23 18:05:15.63805693 +0000 UTC m=+6516.785298921" observedRunningTime="2026-01-23 18:05:16.229302708 +0000 UTC m=+6517.376544699" watchObservedRunningTime="2026-01-23 18:05:16.238750066 +0000 UTC m=+6517.385992057" Jan 23 18:05:21 crc kubenswrapper[4718]: I0123 18:05:21.813408 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:21 crc kubenswrapper[4718]: I0123 18:05:21.813999 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:22 crc kubenswrapper[4718]: I0123 18:05:22.867997 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-njqgt" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="registry-server" probeResult="failure" output=< Jan 23 18:05:22 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 18:05:22 crc kubenswrapper[4718]: > Jan 23 18:05:28 crc 
kubenswrapper[4718]: I0123 18:05:28.875435 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:05:28 crc kubenswrapper[4718]: I0123 18:05:28.875963 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:05:28 crc kubenswrapper[4718]: I0123 18:05:28.876006 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 18:05:28 crc kubenswrapper[4718]: I0123 18:05:28.876885 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:05:28 crc kubenswrapper[4718]: I0123 18:05:28.876941 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765" gracePeriod=600 Jan 23 18:05:29 crc kubenswrapper[4718]: I0123 18:05:29.354080 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" 
containerID="2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765" exitCode=0 Jan 23 18:05:29 crc kubenswrapper[4718]: I0123 18:05:29.354133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765"} Jan 23 18:05:29 crc kubenswrapper[4718]: I0123 18:05:29.354396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc"} Jan 23 18:05:29 crc kubenswrapper[4718]: I0123 18:05:29.354420 4718 scope.go:117] "RemoveContainer" containerID="67b0b4d4c7564e6b7d86da00cf41ce2dff3ba0600086828bb7caea313849e2f5" Jan 23 18:05:31 crc kubenswrapper[4718]: I0123 18:05:31.895059 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:31 crc kubenswrapper[4718]: I0123 18:05:31.952217 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:32 crc kubenswrapper[4718]: I0123 18:05:32.131200 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:33 crc kubenswrapper[4718]: I0123 18:05:33.393761 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-njqgt" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="registry-server" containerID="cri-o://79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a" gracePeriod=2 Jan 23 18:05:33 crc kubenswrapper[4718]: I0123 18:05:33.946092 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.056364 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kvz9\" (UniqueName: \"kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9\") pod \"41edd2c4-c9c7-4087-b474-a907966763be\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.056425 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content\") pod \"41edd2c4-c9c7-4087-b474-a907966763be\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.056761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities\") pod \"41edd2c4-c9c7-4087-b474-a907966763be\" (UID: \"41edd2c4-c9c7-4087-b474-a907966763be\") " Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.057571 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities" (OuterVolumeSpecName: "utilities") pod "41edd2c4-c9c7-4087-b474-a907966763be" (UID: "41edd2c4-c9c7-4087-b474-a907966763be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.068280 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9" (OuterVolumeSpecName: "kube-api-access-2kvz9") pod "41edd2c4-c9c7-4087-b474-a907966763be" (UID: "41edd2c4-c9c7-4087-b474-a907966763be"). InnerVolumeSpecName "kube-api-access-2kvz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.110039 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41edd2c4-c9c7-4087-b474-a907966763be" (UID: "41edd2c4-c9c7-4087-b474-a907966763be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.160460 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kvz9\" (UniqueName: \"kubernetes.io/projected/41edd2c4-c9c7-4087-b474-a907966763be-kube-api-access-2kvz9\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.160491 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.160500 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41edd2c4-c9c7-4087-b474-a907966763be-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.409792 4718 generic.go:334] "Generic (PLEG): container finished" podID="41edd2c4-c9c7-4087-b474-a907966763be" containerID="79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a" exitCode=0 Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.409860 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerDied","Data":"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a"} Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.409910 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-njqgt" event={"ID":"41edd2c4-c9c7-4087-b474-a907966763be","Type":"ContainerDied","Data":"322e1801ae9de393ab7053933e765fa5064a2f040fa56f4327c8bafefcfe1377"} Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.409929 4718 scope.go:117] "RemoveContainer" containerID="79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.410159 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njqgt" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.450043 4718 scope.go:117] "RemoveContainer" containerID="00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.455569 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.465929 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-njqgt"] Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.475276 4718 scope.go:117] "RemoveContainer" containerID="16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.541776 4718 scope.go:117] "RemoveContainer" containerID="79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a" Jan 23 18:05:34 crc kubenswrapper[4718]: E0123 18:05:34.542496 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a\": container with ID starting with 79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a not found: ID does not exist" containerID="79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 
18:05:34.542550 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a"} err="failed to get container status \"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a\": rpc error: code = NotFound desc = could not find container \"79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a\": container with ID starting with 79061c0f26ebec1d5b4375dcc87807cdf077dd3dd81284326db57cdf84e3094a not found: ID does not exist" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.542581 4718 scope.go:117] "RemoveContainer" containerID="00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019" Jan 23 18:05:34 crc kubenswrapper[4718]: E0123 18:05:34.543228 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019\": container with ID starting with 00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019 not found: ID does not exist" containerID="00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.543260 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019"} err="failed to get container status \"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019\": rpc error: code = NotFound desc = could not find container \"00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019\": container with ID starting with 00af0b2d5451ff4c313a4617f7b914c9faa9720c0b68376586d87725374e8019 not found: ID does not exist" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.543285 4718 scope.go:117] "RemoveContainer" containerID="16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d" Jan 23 18:05:34 crc 
kubenswrapper[4718]: E0123 18:05:34.543534 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d\": container with ID starting with 16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d not found: ID does not exist" containerID="16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d" Jan 23 18:05:34 crc kubenswrapper[4718]: I0123 18:05:34.543645 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d"} err="failed to get container status \"16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d\": rpc error: code = NotFound desc = could not find container \"16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d\": container with ID starting with 16b2a7c2aea7c31a66b982f25227f480550cf0463dc9be664e64dcaf97bbf34d not found: ID does not exist" Jan 23 18:05:35 crc kubenswrapper[4718]: I0123 18:05:35.156117 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41edd2c4-c9c7-4087-b474-a907966763be" path="/var/lib/kubelet/pods/41edd2c4-c9c7-4087-b474-a907966763be/volumes" Jan 23 18:06:10 crc kubenswrapper[4718]: I0123 18:06:10.641608 4718 scope.go:117] "RemoveContainer" containerID="ad97f90bfa992d1d52faac1d5960f95b69b3c3d5f7b8a214b08b9c97884c8ca7" Jan 23 18:06:57 crc kubenswrapper[4718]: I0123 18:06:57.408448 4718 generic.go:334] "Generic (PLEG): container finished" podID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerID="cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791" exitCode=0 Jan 23 18:06:57 crc kubenswrapper[4718]: I0123 18:06:57.408562 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" 
event={"ID":"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276","Type":"ContainerDied","Data":"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791"} Jan 23 18:06:57 crc kubenswrapper[4718]: I0123 18:06:57.409762 4718 scope.go:117] "RemoveContainer" containerID="cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791" Jan 23 18:06:57 crc kubenswrapper[4718]: I0123 18:06:57.573876 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjg6c_must-gather-z7bqw_5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276/gather/0.log" Jan 23 18:07:06 crc kubenswrapper[4718]: I0123 18:07:06.895348 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zjg6c/must-gather-z7bqw"] Jan 23 18:07:06 crc kubenswrapper[4718]: I0123 18:07:06.897501 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="copy" containerID="cri-o://a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd" gracePeriod=2 Jan 23 18:07:06 crc kubenswrapper[4718]: I0123 18:07:06.910859 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zjg6c/must-gather-z7bqw"] Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.425951 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjg6c_must-gather-z7bqw_5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276/copy/0.log" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.427717 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.517762 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjg6c_must-gather-z7bqw_5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276/copy/0.log" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.519091 4718 generic.go:334] "Generic (PLEG): container finished" podID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerID="a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd" exitCode=143 Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.519188 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjg6c/must-gather-z7bqw" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.519193 4718 scope.go:117] "RemoveContainer" containerID="a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.546278 4718 scope.go:117] "RemoveContainer" containerID="cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.582307 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output\") pod \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.582545 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw9kh\" (UniqueName: \"kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh\") pod \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\" (UID: \"5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276\") " Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.591839 4718 scope.go:117] "RemoveContainer" 
containerID="a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd" Jan 23 18:07:07 crc kubenswrapper[4718]: E0123 18:07:07.596138 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd\": container with ID starting with a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd not found: ID does not exist" containerID="a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.596194 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd"} err="failed to get container status \"a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd\": rpc error: code = NotFound desc = could not find container \"a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd\": container with ID starting with a573de383473c3f7c9a1d2f521a97a0b8cfd1e3c2242ef8dfe625c4601e8f6dd not found: ID does not exist" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.596228 4718 scope.go:117] "RemoveContainer" containerID="cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791" Jan 23 18:07:07 crc kubenswrapper[4718]: E0123 18:07:07.598900 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791\": container with ID starting with cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791 not found: ID does not exist" containerID="cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.598950 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791"} err="failed to get container status \"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791\": rpc error: code = NotFound desc = could not find container \"cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791\": container with ID starting with cc6c8dad84ebc8fa6076f80e2a46946876c3a8e8d16faad0c5308e0a5d9f4791 not found: ID does not exist" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.598994 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh" (OuterVolumeSpecName: "kube-api-access-pw9kh") pod "5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" (UID: "5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276"). InnerVolumeSpecName "kube-api-access-pw9kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.685670 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw9kh\" (UniqueName: \"kubernetes.io/projected/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-kube-api-access-pw9kh\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.806393 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" (UID: "5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4718]: I0123 18:07:07.890392 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:09 crc kubenswrapper[4718]: I0123 18:07:09.159169 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" path="/var/lib/kubelet/pods/5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276/volumes" Jan 23 18:07:10 crc kubenswrapper[4718]: I0123 18:07:10.724889 4718 scope.go:117] "RemoveContainer" containerID="1801e79fe8b36a48e9393136cf5935ccd3fa2f7e1cd338d76b69c686995640a8" Jan 23 18:07:58 crc kubenswrapper[4718]: I0123 18:07:58.875922 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:07:58 crc kubenswrapper[4718]: I0123 18:07:58.876335 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:08:28 crc kubenswrapper[4718]: I0123 18:08:28.875707 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:08:28 crc kubenswrapper[4718]: I0123 18:08:28.876264 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:08:58 crc kubenswrapper[4718]: I0123 18:08:58.876059 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:08:58 crc kubenswrapper[4718]: I0123 18:08:58.876615 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:08:58 crc kubenswrapper[4718]: I0123 18:08:58.876758 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 18:08:58 crc kubenswrapper[4718]: I0123 18:08:58.877812 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:08:58 crc kubenswrapper[4718]: I0123 18:08:58.877871 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" 
containerID="cri-o://44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" gracePeriod=600 Jan 23 18:08:59 crc kubenswrapper[4718]: E0123 18:08:59.000246 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:08:59 crc kubenswrapper[4718]: I0123 18:08:59.830742 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" exitCode=0 Jan 23 18:08:59 crc kubenswrapper[4718]: I0123 18:08:59.830820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc"} Jan 23 18:08:59 crc kubenswrapper[4718]: I0123 18:08:59.831358 4718 scope.go:117] "RemoveContainer" containerID="2af9a707a11601b4c22694a5acbd4e62919b4bda565f4ac8cf4bce7939b44765" Jan 23 18:08:59 crc kubenswrapper[4718]: I0123 18:08:59.833701 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:08:59 crc kubenswrapper[4718]: E0123 18:08:59.834285 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:09:10 crc kubenswrapper[4718]: I0123 18:09:10.141762 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:09:10 crc kubenswrapper[4718]: E0123 18:09:10.142664 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:09:21 crc kubenswrapper[4718]: I0123 18:09:21.140559 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:09:21 crc kubenswrapper[4718]: E0123 18:09:21.141456 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.770845 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:28 crc kubenswrapper[4718]: E0123 18:09:28.772234 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="registry-server" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772258 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="registry-server" Jan 23 18:09:28 crc kubenswrapper[4718]: 
E0123 18:09:28.772298 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="extract-content" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772310 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="extract-content" Jan 23 18:09:28 crc kubenswrapper[4718]: E0123 18:09:28.772342 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="copy" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772353 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="copy" Jan 23 18:09:28 crc kubenswrapper[4718]: E0123 18:09:28.772423 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="gather" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772434 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="gather" Jan 23 18:09:28 crc kubenswrapper[4718]: E0123 18:09:28.772454 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="extract-utilities" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772465 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="extract-utilities" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772863 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="41edd2c4-c9c7-4087-b474-a907966763be" containerName="registry-server" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772921 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="gather" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.772950 4718 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5cdd38b0-c5e0-41b2-a2b2-e0fd77ec4276" containerName="copy" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.775708 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.784549 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.893003 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.893576 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlnh\" (UniqueName: \"kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.893804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.995668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlnh\" (UniqueName: \"kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh\") pod 
\"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.995956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.996118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.996574 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:28 crc kubenswrapper[4718]: I0123 18:09:28.996586 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content\") pod \"redhat-marketplace-jfwxh\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:29 crc kubenswrapper[4718]: I0123 18:09:29.016191 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdlnh\" (UniqueName: \"kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh\") pod \"redhat-marketplace-jfwxh\" (UID: 
\"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:29 crc kubenswrapper[4718]: I0123 18:09:29.105330 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:29 crc kubenswrapper[4718]: I0123 18:09:29.657328 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:30 crc kubenswrapper[4718]: I0123 18:09:30.173036 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerID="82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506" exitCode=0 Jan 23 18:09:30 crc kubenswrapper[4718]: I0123 18:09:30.173082 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerDied","Data":"82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506"} Jan 23 18:09:30 crc kubenswrapper[4718]: I0123 18:09:30.173336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerStarted","Data":"143e101a10d92b11ebf8c5ef676643a0340b95581627c2ba70573fa001cfb7f3"} Jan 23 18:09:32 crc kubenswrapper[4718]: I0123 18:09:32.201331 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerID="6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8" exitCode=0 Jan 23 18:09:32 crc kubenswrapper[4718]: I0123 18:09:32.201413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerDied","Data":"6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8"} Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.216194 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerStarted","Data":"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8"} Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.243002 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jfwxh" podStartSLOduration=2.645882398 podStartE2EDuration="5.242975921s" podCreationTimestamp="2026-01-23 18:09:28 +0000 UTC" firstStartedPulling="2026-01-23 18:09:30.176081263 +0000 UTC m=+6771.323323254" lastFinishedPulling="2026-01-23 18:09:32.773174796 +0000 UTC m=+6773.920416777" observedRunningTime="2026-01-23 18:09:33.232935758 +0000 UTC m=+6774.380177749" watchObservedRunningTime="2026-01-23 18:09:33.242975921 +0000 UTC m=+6774.390217912" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.582998 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.585814 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.600343 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.713671 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.713730 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngmz\" (UniqueName: \"kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.713801 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.817745 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.817830 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xngmz\" (UniqueName: \"kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.817927 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.818362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.818394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.855352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xngmz\" (UniqueName: \"kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz\") pod \"certified-operators-gj9k7\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:33 crc kubenswrapper[4718]: I0123 18:09:33.926118 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:34 crc kubenswrapper[4718]: I0123 18:09:34.140939 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:09:34 crc kubenswrapper[4718]: E0123 18:09:34.141585 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:09:34 crc kubenswrapper[4718]: I0123 18:09:34.334975 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:35 crc kubenswrapper[4718]: I0123 18:09:35.250191 4718 generic.go:334] "Generic (PLEG): container finished" podID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerID="bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3" exitCode=0 Jan 23 18:09:35 crc kubenswrapper[4718]: I0123 18:09:35.250737 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerDied","Data":"bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3"} Jan 23 18:09:35 crc kubenswrapper[4718]: I0123 18:09:35.250884 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerStarted","Data":"2a8b4e63fa57e3c4d87725bbe4d76ffaa4227886aee9f604bd1356352bb1fe6c"} Jan 23 18:09:36 crc kubenswrapper[4718]: I0123 18:09:36.263006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" 
event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerStarted","Data":"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259"} Jan 23 18:09:38 crc kubenswrapper[4718]: I0123 18:09:38.285706 4718 generic.go:334] "Generic (PLEG): container finished" podID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerID="3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259" exitCode=0 Jan 23 18:09:38 crc kubenswrapper[4718]: I0123 18:09:38.285818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerDied","Data":"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259"} Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.105447 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.105866 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.204840 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.312975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerStarted","Data":"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac"} Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.343224 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gj9k7" podStartSLOduration=2.890076194 podStartE2EDuration="6.343202334s" podCreationTimestamp="2026-01-23 18:09:33 +0000 UTC" firstStartedPulling="2026-01-23 18:09:35.254693326 +0000 UTC 
m=+6776.401935317" lastFinishedPulling="2026-01-23 18:09:38.707819466 +0000 UTC m=+6779.855061457" observedRunningTime="2026-01-23 18:09:39.332474772 +0000 UTC m=+6780.479716763" watchObservedRunningTime="2026-01-23 18:09:39.343202334 +0000 UTC m=+6780.490444325" Jan 23 18:09:39 crc kubenswrapper[4718]: I0123 18:09:39.370219 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:41 crc kubenswrapper[4718]: I0123 18:09:41.759080 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:41 crc kubenswrapper[4718]: I0123 18:09:41.759648 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jfwxh" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="registry-server" containerID="cri-o://9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8" gracePeriod=2 Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.272345 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.354679 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerID="9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8" exitCode=0 Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.354722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerDied","Data":"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8"} Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.354748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfwxh" event={"ID":"7d64ec48-235b-4147-b22c-a82c92a29c9f","Type":"ContainerDied","Data":"143e101a10d92b11ebf8c5ef676643a0340b95581627c2ba70573fa001cfb7f3"} Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.354764 4718 scope.go:117] "RemoveContainer" containerID="9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.354773 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfwxh" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.363981 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdlnh\" (UniqueName: \"kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh\") pod \"7d64ec48-235b-4147-b22c-a82c92a29c9f\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.364331 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities\") pod \"7d64ec48-235b-4147-b22c-a82c92a29c9f\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.364461 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content\") pod \"7d64ec48-235b-4147-b22c-a82c92a29c9f\" (UID: \"7d64ec48-235b-4147-b22c-a82c92a29c9f\") " Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.365145 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities" (OuterVolumeSpecName: "utilities") pod "7d64ec48-235b-4147-b22c-a82c92a29c9f" (UID: "7d64ec48-235b-4147-b22c-a82c92a29c9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.377673 4718 scope.go:117] "RemoveContainer" containerID="6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.380956 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh" (OuterVolumeSpecName: "kube-api-access-mdlnh") pod "7d64ec48-235b-4147-b22c-a82c92a29c9f" (UID: "7d64ec48-235b-4147-b22c-a82c92a29c9f"). InnerVolumeSpecName "kube-api-access-mdlnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.384365 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d64ec48-235b-4147-b22c-a82c92a29c9f" (UID: "7d64ec48-235b-4147-b22c-a82c92a29c9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.441939 4718 scope.go:117] "RemoveContainer" containerID="82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.466873 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.466903 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdlnh\" (UniqueName: \"kubernetes.io/projected/7d64ec48-235b-4147-b22c-a82c92a29c9f-kube-api-access-mdlnh\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.466914 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d64ec48-235b-4147-b22c-a82c92a29c9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.498764 4718 scope.go:117] "RemoveContainer" containerID="9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8" Jan 23 18:09:42 crc kubenswrapper[4718]: E0123 18:09:42.499335 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8\": container with ID starting with 9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8 not found: ID does not exist" containerID="9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.499367 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8"} err="failed to get container status 
\"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8\": rpc error: code = NotFound desc = could not find container \"9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8\": container with ID starting with 9194ae2ea57847bd7e473259029250af59c38b046a4e6b97fdc4253e742a54b8 not found: ID does not exist" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.499387 4718 scope.go:117] "RemoveContainer" containerID="6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8" Jan 23 18:09:42 crc kubenswrapper[4718]: E0123 18:09:42.499861 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8\": container with ID starting with 6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8 not found: ID does not exist" containerID="6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.499877 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8"} err="failed to get container status \"6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8\": rpc error: code = NotFound desc = could not find container \"6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8\": container with ID starting with 6fd946df6f37da7952dbdcb54293515e0d5fa8f383da66e264c300695a3937f8 not found: ID does not exist" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.499888 4718 scope.go:117] "RemoveContainer" containerID="82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506" Jan 23 18:09:42 crc kubenswrapper[4718]: E0123 18:09:42.500135 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506\": container with ID starting with 82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506 not found: ID does not exist" containerID="82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.500152 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506"} err="failed to get container status \"82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506\": rpc error: code = NotFound desc = could not find container \"82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506\": container with ID starting with 82c9a58237580a28197ce25d1b2c8f38a6b804c3416ce2e627283d80da0d7506 not found: ID does not exist" Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.691322 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:42 crc kubenswrapper[4718]: I0123 18:09:42.706806 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfwxh"] Jan 23 18:09:43 crc kubenswrapper[4718]: I0123 18:09:43.155908 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" path="/var/lib/kubelet/pods/7d64ec48-235b-4147-b22c-a82c92a29c9f/volumes" Jan 23 18:09:43 crc kubenswrapper[4718]: I0123 18:09:43.927291 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:43 crc kubenswrapper[4718]: I0123 18:09:43.927402 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:43 crc kubenswrapper[4718]: I0123 18:09:43.979548 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:44 crc kubenswrapper[4718]: I0123 18:09:44.436653 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.141455 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:09:46 crc kubenswrapper[4718]: E0123 18:09:46.142229 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.359998 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.397032 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gj9k7" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="registry-server" containerID="cri-o://7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac" gracePeriod=2 Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.903565 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.991904 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xngmz\" (UniqueName: \"kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz\") pod \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.992308 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities\") pod \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.992494 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content\") pod \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\" (UID: \"30aa123b-aaa8-450c-972f-71ad68e7d0c8\") " Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.993973 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities" (OuterVolumeSpecName: "utilities") pod "30aa123b-aaa8-450c-972f-71ad68e7d0c8" (UID: "30aa123b-aaa8-450c-972f-71ad68e7d0c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:46 crc kubenswrapper[4718]: I0123 18:09:46.999188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz" (OuterVolumeSpecName: "kube-api-access-xngmz") pod "30aa123b-aaa8-450c-972f-71ad68e7d0c8" (UID: "30aa123b-aaa8-450c-972f-71ad68e7d0c8"). InnerVolumeSpecName "kube-api-access-xngmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.055259 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30aa123b-aaa8-450c-972f-71ad68e7d0c8" (UID: "30aa123b-aaa8-450c-972f-71ad68e7d0c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.096560 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xngmz\" (UniqueName: \"kubernetes.io/projected/30aa123b-aaa8-450c-972f-71ad68e7d0c8-kube-api-access-xngmz\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.096607 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.096617 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30aa123b-aaa8-450c-972f-71ad68e7d0c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.409011 4718 generic.go:334] "Generic (PLEG): container finished" podID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerID="7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac" exitCode=0 Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.409055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerDied","Data":"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac"} Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.409089 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gj9k7" event={"ID":"30aa123b-aaa8-450c-972f-71ad68e7d0c8","Type":"ContainerDied","Data":"2a8b4e63fa57e3c4d87725bbe4d76ffaa4227886aee9f604bd1356352bb1fe6c"} Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.409107 4718 scope.go:117] "RemoveContainer" containerID="7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.409120 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj9k7" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.437920 4718 scope.go:117] "RemoveContainer" containerID="3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.438748 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.454468 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gj9k7"] Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.460273 4718 scope.go:117] "RemoveContainer" containerID="bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.527312 4718 scope.go:117] "RemoveContainer" containerID="7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac" Jan 23 18:09:47 crc kubenswrapper[4718]: E0123 18:09:47.527748 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac\": container with ID starting with 7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac not found: ID does not exist" containerID="7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 
18:09:47.527799 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac"} err="failed to get container status \"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac\": rpc error: code = NotFound desc = could not find container \"7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac\": container with ID starting with 7303f3ba386ec3f4828f87096d13469369881926d7b2c74f2507c2cdb47fc0ac not found: ID does not exist" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.527818 4718 scope.go:117] "RemoveContainer" containerID="3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259" Jan 23 18:09:47 crc kubenswrapper[4718]: E0123 18:09:47.528982 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259\": container with ID starting with 3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259 not found: ID does not exist" containerID="3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.529010 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259"} err="failed to get container status \"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259\": rpc error: code = NotFound desc = could not find container \"3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259\": container with ID starting with 3d7a888712269eedf9a57d232c2a6ddecb01dc3c8d47fc57382e5a86d90e1259 not found: ID does not exist" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.529030 4718 scope.go:117] "RemoveContainer" containerID="bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3" Jan 23 18:09:47 crc 
kubenswrapper[4718]: E0123 18:09:47.529255 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3\": container with ID starting with bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3 not found: ID does not exist" containerID="bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3" Jan 23 18:09:47 crc kubenswrapper[4718]: I0123 18:09:47.529282 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3"} err="failed to get container status \"bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3\": rpc error: code = NotFound desc = could not find container \"bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3\": container with ID starting with bf7097ea47fb2cd85b47d04ca144c47e738c55b3d179dbe565397bb2242b33e3 not found: ID does not exist" Jan 23 18:09:49 crc kubenswrapper[4718]: I0123 18:09:49.154621 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" path="/var/lib/kubelet/pods/30aa123b-aaa8-450c-972f-71ad68e7d0c8/volumes" Jan 23 18:09:57 crc kubenswrapper[4718]: I0123 18:09:57.140227 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:09:57 crc kubenswrapper[4718]: E0123 18:09:57.141109 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:10:09 crc 
kubenswrapper[4718]: I0123 18:10:09.150046 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:10:09 crc kubenswrapper[4718]: E0123 18:10:09.150843 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:10:20 crc kubenswrapper[4718]: I0123 18:10:20.141333 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:10:20 crc kubenswrapper[4718]: E0123 18:10:20.142184 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:10:34 crc kubenswrapper[4718]: I0123 18:10:34.140489 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:10:34 crc kubenswrapper[4718]: E0123 18:10:34.141564 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 
23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.704343 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mk2dv/must-gather-k4zqv"] Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706251 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="extract-utilities" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706285 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="extract-utilities" Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706332 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706341 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706353 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706361 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706380 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="extract-utilities" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706387 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="extract-utilities" Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706399 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="extract-content" Jan 23 18:10:43 crc 
kubenswrapper[4718]: I0123 18:10:43.706407 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="extract-content" Jan 23 18:10:43 crc kubenswrapper[4718]: E0123 18:10:43.706430 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="extract-content" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706438 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="extract-content" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706789 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d64ec48-235b-4147-b22c-a82c92a29c9f" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.706825 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="30aa123b-aaa8-450c-972f-71ad68e7d0c8" containerName="registry-server" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.708485 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.719911 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mk2dv"/"kube-root-ca.crt" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.721958 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mk2dv"/"openshift-service-ca.crt" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.732287 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mk2dv/must-gather-k4zqv"] Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.811884 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5j48\" (UniqueName: \"kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.812328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.915159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.915360 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t5j48\" (UniqueName: \"kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.915817 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:43 crc kubenswrapper[4718]: I0123 18:10:43.941758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5j48\" (UniqueName: \"kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48\") pod \"must-gather-k4zqv\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:44 crc kubenswrapper[4718]: I0123 18:10:44.057309 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:10:44 crc kubenswrapper[4718]: I0123 18:10:44.613862 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mk2dv/must-gather-k4zqv"] Jan 23 18:10:45 crc kubenswrapper[4718]: I0123 18:10:45.071782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" event={"ID":"83eabaad-0a68-4c52-8829-89142370eab1","Type":"ContainerStarted","Data":"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68"} Jan 23 18:10:45 crc kubenswrapper[4718]: I0123 18:10:45.071822 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" event={"ID":"83eabaad-0a68-4c52-8829-89142370eab1","Type":"ContainerStarted","Data":"4cf7197bd9058d6f7f05db872e695d4401e5aa5dfa9f9ac72cee0b1585f1deb0"} Jan 23 18:10:46 crc kubenswrapper[4718]: I0123 18:10:46.084180 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" event={"ID":"83eabaad-0a68-4c52-8829-89142370eab1","Type":"ContainerStarted","Data":"a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4"} Jan 23 18:10:46 crc kubenswrapper[4718]: I0123 18:10:46.105996 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" podStartSLOduration=3.105975007 podStartE2EDuration="3.105975007s" podCreationTimestamp="2026-01-23 18:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:10:46.09906867 +0000 UTC m=+6847.246310671" watchObservedRunningTime="2026-01-23 18:10:46.105975007 +0000 UTC m=+6847.253216998" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.141970 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:10:48 crc 
kubenswrapper[4718]: E0123 18:10:48.142543 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.706755 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5dclw"] Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.708788 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.712520 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mk2dv"/"default-dockercfg-wmwct" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.841693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nc2\" (UniqueName: \"kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.842104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.944599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.944718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.944877 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nc2\" (UniqueName: \"kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:48 crc kubenswrapper[4718]: I0123 18:10:48.968146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nc2\" (UniqueName: \"kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2\") pod \"crc-debug-5dclw\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:49 crc kubenswrapper[4718]: I0123 18:10:49.028357 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:10:49 crc kubenswrapper[4718]: I0123 18:10:49.212435 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" event={"ID":"594aab5f-d308-4da7-ac51-38cce24784ae","Type":"ContainerStarted","Data":"aed3b7cf292f28a714a305cd60b10c9288e77015ff36490aba1a679e5cdf5c61"} Jan 23 18:10:50 crc kubenswrapper[4718]: I0123 18:10:50.228792 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" event={"ID":"594aab5f-d308-4da7-ac51-38cce24784ae","Type":"ContainerStarted","Data":"8cc9eb2cc70617a7addeab8099a5a51167ac76a25b9a4c82138de030c597e86e"} Jan 23 18:10:50 crc kubenswrapper[4718]: I0123 18:10:50.250897 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" podStartSLOduration=2.25087254 podStartE2EDuration="2.25087254s" podCreationTimestamp="2026-01-23 18:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:10:50.24279853 +0000 UTC m=+6851.390040541" watchObservedRunningTime="2026-01-23 18:10:50.25087254 +0000 UTC m=+6851.398114531" Jan 23 18:11:00 crc kubenswrapper[4718]: I0123 18:11:00.140771 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:11:00 crc kubenswrapper[4718]: E0123 18:11:00.141833 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:11:12 crc kubenswrapper[4718]: 
I0123 18:11:12.141213 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:11:12 crc kubenswrapper[4718]: E0123 18:11:12.142117 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:11:24 crc kubenswrapper[4718]: I0123 18:11:24.140180 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:11:24 crc kubenswrapper[4718]: E0123 18:11:24.140968 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:11:36 crc kubenswrapper[4718]: I0123 18:11:36.140704 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:11:36 crc kubenswrapper[4718]: E0123 18:11:36.141819 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:11:38 crc 
kubenswrapper[4718]: I0123 18:11:38.948265 4718 generic.go:334] "Generic (PLEG): container finished" podID="594aab5f-d308-4da7-ac51-38cce24784ae" containerID="8cc9eb2cc70617a7addeab8099a5a51167ac76a25b9a4c82138de030c597e86e" exitCode=0 Jan 23 18:11:38 crc kubenswrapper[4718]: I0123 18:11:38.948469 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" event={"ID":"594aab5f-d308-4da7-ac51-38cce24784ae","Type":"ContainerDied","Data":"8cc9eb2cc70617a7addeab8099a5a51167ac76a25b9a4c82138de030c597e86e"} Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.110570 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.150806 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5dclw"] Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.161293 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5dclw"] Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.247815 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host\") pod \"594aab5f-d308-4da7-ac51-38cce24784ae\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.247932 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host" (OuterVolumeSpecName: "host") pod "594aab5f-d308-4da7-ac51-38cce24784ae" (UID: "594aab5f-d308-4da7-ac51-38cce24784ae"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.247991 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2nc2\" (UniqueName: \"kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2\") pod \"594aab5f-d308-4da7-ac51-38cce24784ae\" (UID: \"594aab5f-d308-4da7-ac51-38cce24784ae\") " Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.248986 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/594aab5f-d308-4da7-ac51-38cce24784ae-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.253908 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2" (OuterVolumeSpecName: "kube-api-access-s2nc2") pod "594aab5f-d308-4da7-ac51-38cce24784ae" (UID: "594aab5f-d308-4da7-ac51-38cce24784ae"). InnerVolumeSpecName "kube-api-access-s2nc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.351470 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2nc2\" (UniqueName: \"kubernetes.io/projected/594aab5f-d308-4da7-ac51-38cce24784ae-kube-api-access-s2nc2\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.973159 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed3b7cf292f28a714a305cd60b10c9288e77015ff36490aba1a679e5cdf5c61" Jan 23 18:11:40 crc kubenswrapper[4718]: I0123 18:11:40.973266 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5dclw" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.156321 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594aab5f-d308-4da7-ac51-38cce24784ae" path="/var/lib/kubelet/pods/594aab5f-d308-4da7-ac51-38cce24784ae/volumes" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.451470 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5v6s6"] Jan 23 18:11:41 crc kubenswrapper[4718]: E0123 18:11:41.452074 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594aab5f-d308-4da7-ac51-38cce24784ae" containerName="container-00" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.452092 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="594aab5f-d308-4da7-ac51-38cce24784ae" containerName="container-00" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.452329 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="594aab5f-d308-4da7-ac51-38cce24784ae" containerName="container-00" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.453118 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.459136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mk2dv"/"default-dockercfg-wmwct" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.583649 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.584043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58m25\" (UniqueName: \"kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.686775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.686926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.687102 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58m25\" (UniqueName: 
\"kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.707308 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58m25\" (UniqueName: \"kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25\") pod \"crc-debug-5v6s6\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.775678 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:41 crc kubenswrapper[4718]: W0123 18:11:41.855171 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad34044_4bdc_4f3e_b311_f91e27b3c43d.slice/crio-69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc WatchSource:0}: Error finding container 69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc: Status 404 returned error can't find the container with id 69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc Jan 23 18:11:41 crc kubenswrapper[4718]: I0123 18:11:41.985228 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" event={"ID":"aad34044-4bdc-4f3e-b311-f91e27b3c43d","Type":"ContainerStarted","Data":"69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc"} Jan 23 18:11:42 crc kubenswrapper[4718]: I0123 18:11:42.996767 4718 generic.go:334] "Generic (PLEG): container finished" podID="aad34044-4bdc-4f3e-b311-f91e27b3c43d" containerID="b19b32d416ae11e68861eed0246c57e6bd7a9141d7843336e441522836cb8a3f" exitCode=0 Jan 23 18:11:42 crc kubenswrapper[4718]: I0123 18:11:42.996868 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" event={"ID":"aad34044-4bdc-4f3e-b311-f91e27b3c43d","Type":"ContainerDied","Data":"b19b32d416ae11e68861eed0246c57e6bd7a9141d7843336e441522836cb8a3f"} Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.156460 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.252527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58m25\" (UniqueName: \"kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25\") pod \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.252840 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host\") pod \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\" (UID: \"aad34044-4bdc-4f3e-b311-f91e27b3c43d\") " Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.254257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host" (OuterVolumeSpecName: "host") pod "aad34044-4bdc-4f3e-b311-f91e27b3c43d" (UID: "aad34044-4bdc-4f3e-b311-f91e27b3c43d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.255356 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aad34044-4bdc-4f3e-b311-f91e27b3c43d-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.287000 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25" (OuterVolumeSpecName: "kube-api-access-58m25") pod "aad34044-4bdc-4f3e-b311-f91e27b3c43d" (UID: "aad34044-4bdc-4f3e-b311-f91e27b3c43d"). InnerVolumeSpecName "kube-api-access-58m25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:44 crc kubenswrapper[4718]: I0123 18:11:44.357383 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58m25\" (UniqueName: \"kubernetes.io/projected/aad34044-4bdc-4f3e-b311-f91e27b3c43d-kube-api-access-58m25\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:45 crc kubenswrapper[4718]: I0123 18:11:45.025057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" event={"ID":"aad34044-4bdc-4f3e-b311-f91e27b3c43d","Type":"ContainerDied","Data":"69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc"} Jan 23 18:11:45 crc kubenswrapper[4718]: I0123 18:11:45.025395 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69387b69225377bd3178a861739abe90228f2e685c1e412ac2c35ba09a5324cc" Jan 23 18:11:45 crc kubenswrapper[4718]: I0123 18:11:45.025146 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-5v6s6" Jan 23 18:11:45 crc kubenswrapper[4718]: I0123 18:11:45.457126 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5v6s6"] Jan 23 18:11:45 crc kubenswrapper[4718]: I0123 18:11:45.471854 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-5v6s6"] Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.739912 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-tn6rb"] Jan 23 18:11:46 crc kubenswrapper[4718]: E0123 18:11:46.740664 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad34044-4bdc-4f3e-b311-f91e27b3c43d" containerName="container-00" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.740677 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad34044-4bdc-4f3e-b311-f91e27b3c43d" containerName="container-00" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.747667 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad34044-4bdc-4f3e-b311-f91e27b3c43d" containerName="container-00" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.748772 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.750832 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mk2dv"/"default-dockercfg-wmwct" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.810414 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.810487 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt6bg\" (UniqueName: \"kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.913049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt6bg\" (UniqueName: \"kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.913364 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.913487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:46 crc kubenswrapper[4718]: I0123 18:11:46.932988 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt6bg\" (UniqueName: \"kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg\") pod \"crc-debug-tn6rb\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:47 crc kubenswrapper[4718]: I0123 18:11:47.067253 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:47 crc kubenswrapper[4718]: I0123 18:11:47.157034 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad34044-4bdc-4f3e-b311-f91e27b3c43d" path="/var/lib/kubelet/pods/aad34044-4bdc-4f3e-b311-f91e27b3c43d/volumes" Jan 23 18:11:48 crc kubenswrapper[4718]: I0123 18:11:48.059515 4718 generic.go:334] "Generic (PLEG): container finished" podID="48255b18-be57-4299-8651-104f694b9299" containerID="ccd26d2a6aa1e32879cd3591ef3b6c362d664ab7de9aa38b32baf5dbdb40694f" exitCode=0 Jan 23 18:11:48 crc kubenswrapper[4718]: I0123 18:11:48.059828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" event={"ID":"48255b18-be57-4299-8651-104f694b9299","Type":"ContainerDied","Data":"ccd26d2a6aa1e32879cd3591ef3b6c362d664ab7de9aa38b32baf5dbdb40694f"} Jan 23 18:11:48 crc kubenswrapper[4718]: I0123 18:11:48.059867 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" event={"ID":"48255b18-be57-4299-8651-104f694b9299","Type":"ContainerStarted","Data":"065bdc48646c046b12fb3149070d9bf605c39e7b06c4bf1e1e3cde81ff776dc8"} Jan 23 18:11:48 crc kubenswrapper[4718]: I0123 18:11:48.115071 4718 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-tn6rb"] Jan 23 18:11:48 crc kubenswrapper[4718]: I0123 18:11:48.132312 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mk2dv/crc-debug-tn6rb"] Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.202982 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.273990 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host\") pod \"48255b18-be57-4299-8651-104f694b9299\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.274304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt6bg\" (UniqueName: \"kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg\") pod \"48255b18-be57-4299-8651-104f694b9299\" (UID: \"48255b18-be57-4299-8651-104f694b9299\") " Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.274167 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host" (OuterVolumeSpecName: "host") pod "48255b18-be57-4299-8651-104f694b9299" (UID: "48255b18-be57-4299-8651-104f694b9299"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.275225 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48255b18-be57-4299-8651-104f694b9299-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.293000 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg" (OuterVolumeSpecName: "kube-api-access-jt6bg") pod "48255b18-be57-4299-8651-104f694b9299" (UID: "48255b18-be57-4299-8651-104f694b9299"). InnerVolumeSpecName "kube-api-access-jt6bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:49 crc kubenswrapper[4718]: I0123 18:11:49.378100 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt6bg\" (UniqueName: \"kubernetes.io/projected/48255b18-be57-4299-8651-104f694b9299-kube-api-access-jt6bg\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:50 crc kubenswrapper[4718]: I0123 18:11:50.090763 4718 scope.go:117] "RemoveContainer" containerID="ccd26d2a6aa1e32879cd3591ef3b6c362d664ab7de9aa38b32baf5dbdb40694f" Jan 23 18:11:50 crc kubenswrapper[4718]: I0123 18:11:50.090890 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/crc-debug-tn6rb" Jan 23 18:11:51 crc kubenswrapper[4718]: I0123 18:11:51.146254 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:11:51 crc kubenswrapper[4718]: E0123 18:11:51.146761 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:11:51 crc kubenswrapper[4718]: I0123 18:11:51.171063 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48255b18-be57-4299-8651-104f694b9299" path="/var/lib/kubelet/pods/48255b18-be57-4299-8651-104f694b9299/volumes" Jan 23 18:12:06 crc kubenswrapper[4718]: I0123 18:12:06.140382 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:12:06 crc kubenswrapper[4718]: E0123 18:12:06.141344 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:12:17 crc kubenswrapper[4718]: I0123 18:12:17.141593 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:12:17 crc kubenswrapper[4718]: E0123 18:12:17.142990 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:12:23 crc kubenswrapper[4718]: I0123 18:12:23.238648 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-ps7vw" podUID="a2118990-95a4-4a61-8c6a-3a72bdea8642" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.478504 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-api/0.log" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.654773 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-evaluator/0.log" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.666503 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-listener/0.log" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.703656 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd9c60ee-d51d-4435-889b-870662f44dd6/aodh-notifier/0.log" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.838877 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b4d87654d-p9q2p_d3d50a24-2b4e-43eb-ac1a-2807554f0989/barbican-api/0.log" Jan 23 18:12:25 crc kubenswrapper[4718]: I0123 18:12:25.870936 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b4d87654d-p9q2p_d3d50a24-2b4e-43eb-ac1a-2807554f0989/barbican-api-log/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 
18:12:26.001283 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-655bd97bbb-6cj47_b146c37c-0473-4db8-a743-72a7576edf59/barbican-keystone-listener/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.142187 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-655bd97bbb-6cj47_b146c37c-0473-4db8-a743-72a7576edf59/barbican-keystone-listener-log/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.169191 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8474589d6c-tbnqc_12450ab0-8804-4354-83ff-47ca9b58bcec/barbican-worker/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.246558 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8474589d6c-tbnqc_12450ab0-8804-4354-83ff-47ca9b58bcec/barbican-worker-log/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.342781 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jlnp8_d78674b0-cdd9-4a34-a2d0-b9eece735396/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.488120 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-central-agent/1.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.610445 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-central-agent/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.661036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/ceilometer-notification-agent/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.668867 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/proxy-httpd/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.692624 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764a9a7e-61b2-4513-8f87-fc357857c90f/sg-core/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.910739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdab71e-08c8-4269-a9dd-69b152751e4d/cinder-api-log/0.log" Jan 23 18:12:26 crc kubenswrapper[4718]: I0123 18:12:26.954708 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdab71e-08c8-4269-a9dd-69b152751e4d/cinder-api/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.129715 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f1c0f246-5016-4f2f-94a8-5805981faffc/cinder-scheduler/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.222252 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f1c0f246-5016-4f2f-94a8-5805981faffc/probe/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.243122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s9v79_a410865c-527d-4070-8dcd-d4ef16f73c82/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.429208 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tl8rg_2fcefafb-b44e-4b47-a2f1-302f824b0dd5/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.458519 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/init/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.641924 
4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/init/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.677279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qzvt6_c94d0e96-185e-4f09-bb48-9fb2e6874fec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:27 crc kubenswrapper[4718]: I0123 18:12:27.797448 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4pkvj_69dc82c8-1e85-459e-9580-cbc33c567be5/dnsmasq-dns/0.log" Jan 23 18:12:28 crc kubenswrapper[4718]: I0123 18:12:28.141095 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:12:28 crc kubenswrapper[4718]: E0123 18:12:28.141373 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:12:28 crc kubenswrapper[4718]: I0123 18:12:28.149025 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3948da45-04b4-4a32-b5d5-0701d87095a7/glance-httpd/0.log" Jan 23 18:12:28 crc kubenswrapper[4718]: I0123 18:12:28.219905 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3948da45-04b4-4a32-b5d5-0701d87095a7/glance-log/0.log" Jan 23 18:12:28 crc kubenswrapper[4718]: I0123 18:12:28.361116 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_47efd469-ac22-42d8-bb00-fd20450c9e7e/glance-log/0.log" Jan 23 18:12:28 crc kubenswrapper[4718]: I0123 18:12:28.446341 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_47efd469-ac22-42d8-bb00-fd20450c9e7e/glance-httpd/0.log" Jan 23 18:12:29 crc kubenswrapper[4718]: I0123 18:12:29.493138 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-k98h8_e4ba8316-551c-484b-b458-1feab6b0e72b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:29 crc kubenswrapper[4718]: I0123 18:12:29.896941 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kp4hx_cec28e23-37c2-4a27-872d-40cb7ad130c5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:29 crc kubenswrapper[4718]: I0123 18:12:29.921429 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6b6465d99d-xv658_6bb4cb4d-9614-4570-a061-73f87bc9a159/heat-engine/0.log" Jan 23 18:12:29 crc kubenswrapper[4718]: I0123 18:12:29.923040 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-95db6b64d-5qj7l_6e6107cd-49bf-4f98-a70b-715fcdcc1535/heat-api/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.016890 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6bf6f4bd98-77tgt_7f75fda9-2f08-4ec3-a7c9-6d7103f4f4e8/heat-cfnapi/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.142350 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486521-72c9v_c16bee66-a1b2-451c-81b8-d33bba2aae01/keystone-cron/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.172900 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29486461-g5844_5f29f7cb-356b-4f33-a5db-2b6977793db4/keystone-cron/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.432760 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8ee3d2a0-3f10-40d9-980c-deb1bc35b613/kube-state-metrics/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.601365 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-j5tsv_680f27a4-945b-4f46-ae19-c0b05b6f3d4c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.693074 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-pbkhz_86526a30-7eef-4621-944a-cab9bd64903b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.824947 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f6b6f5fd7-qpbqz_e836bdf5-8379-4f60-8dbe-7be5381ed922/keystone-api/0.log" Jan 23 18:12:30 crc kubenswrapper[4718]: I0123 18:12:30.890613 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_fa116646-6ee2-42f2-8a0f-56459516d495/mysqld-exporter/0.log" Jan 23 18:12:31 crc kubenswrapper[4718]: I0123 18:12:31.309967 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8567b78dd5-chd6w_5c367121-318c-413c-96e5-f53a105d91d3/neutron-httpd/0.log" Jan 23 18:12:31 crc kubenswrapper[4718]: I0123 18:12:31.339583 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8567b78dd5-chd6w_5c367121-318c-413c-96e5-f53a105d91d3/neutron-api/0.log" Jan 23 18:12:31 crc kubenswrapper[4718]: I0123 18:12:31.369006 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xhs7w_90730357-1c99-420b-8ff6-f82638fbd43f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:32 crc kubenswrapper[4718]: I0123 18:12:32.109994 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0ac70adf-8253-4b66-91b9-beb3c28648d5/nova-cell0-conductor-conductor/0.log" Jan 23 18:12:32 crc kubenswrapper[4718]: I0123 18:12:32.291367 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_353b7e73-13e9-4989-8f55-5dedebe8e92a/nova-api-log/0.log" Jan 23 18:12:32 crc kubenswrapper[4718]: I0123 18:12:32.401667 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7057072a-eda4-442e-9cb6-b9c2dbaebe3d/nova-cell1-conductor-conductor/0.log" Jan 23 18:12:32 crc kubenswrapper[4718]: I0123 18:12:32.731908 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-zq8vl_7354170c-e5c6-4c6e-be23-d2c6bd685aa0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:32 crc kubenswrapper[4718]: I0123 18:12:32.740200 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2ec50566-57bf-4ddf-aa36-4dfe1fa36d07/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.033739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_82c5d1a7-2493-4399-9a20-247f71a1c754/nova-metadata-log/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.079501 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_353b7e73-13e9-4989-8f55-5dedebe8e92a/nova-api-api/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.384685 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/mysql-bootstrap/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.505322 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f249685b-e052-4a6c-b34e-28fa3fe0610a/nova-scheduler-scheduler/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.602932 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/galera/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.641985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91543550-f764-468a-a1e1-980e3d08aa41/mysql-bootstrap/0.log" Jan 23 18:12:33 crc kubenswrapper[4718]: I0123 18:12:33.860448 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/mysql-bootstrap/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.098094 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/mysql-bootstrap/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.117872 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_592a76d3-742f-47a0-9054-309fb2670fa3/galera/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.279725 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ed1d44ad-8796-452b-a194-17b351fc8c01/openstackclient/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.394070 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c9rfg_d2e5ad0b-04cf-49a9-badc-9e3184385c5b/ovn-controller/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.604129 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-mwsph_760d2aa5-6dd3-43b1-8447-b1d1e655ee14/openstack-network-exporter/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.734232 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server-init/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.933747 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovs-vswitchd/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.951869 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server-init/0.log" Jan 23 18:12:34 crc kubenswrapper[4718]: I0123 18:12:34.963730 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f2xs2_a6481072-75b7-4b67-94a4-94041ef225f6/ovsdb-server/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.203564 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-2lgn4_3c8cfb53-9d77-472b-a67e-cfe479ef8aa3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.415784 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_873a6275-b8f2-4554-9c4d-f44a6629111d/ovn-northd/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.417401 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_873a6275-b8f2-4554-9c4d-f44a6629111d/openstack-network-exporter/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.654390 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cf53eabe-609c-471c-ae7e-ca9fb950f86e/openstack-network-exporter/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.699195 
4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cf53eabe-609c-471c-ae7e-ca9fb950f86e/ovsdbserver-nb/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.851250 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_82c5d1a7-2493-4399-9a20-247f71a1c754/nova-metadata-metadata/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.879156 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_28af9db0-905d-46cc-8ab9-887e0f58ee9b/openstack-network-exporter/0.log" Jan 23 18:12:35 crc kubenswrapper[4718]: I0123 18:12:35.934282 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_28af9db0-905d-46cc-8ab9-887e0f58ee9b/ovsdbserver-sb/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.191849 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dffc5fb8-5997w_4878ef2b-0a67-424e-95b7-53803746d9f3/placement-api/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.321565 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/init-config-reloader/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.346537 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dffc5fb8-5997w_4878ef2b-0a67-424e-95b7-53803746d9f3/placement-log/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.502730 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/prometheus/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.503674 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/config-reloader/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.547678 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/init-config-reloader/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.600590 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9a386a29-4af1-4f01-ac73-771210f5a97f/thanos-sidecar/0.log" Jan 23 18:12:36 crc kubenswrapper[4718]: I0123 18:12:36.734255 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.001012 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.031648 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.057624 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bca04db1-8e77-405e-b8ef-656cf882136c/rabbitmq/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.243116 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.335290 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.343128 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6854d6fc-92af-4083-a2e2-2f41dd9d2a73/rabbitmq/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.528383 4718 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.649993 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6e86bb18-ca80-49f5-9de6-46737ff29374/rabbitmq/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.654157 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.916496 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/setup-container/0.log" Jan 23 18:12:37 crc kubenswrapper[4718]: I0123 18:12:37.943055 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wpdsh_ecb3058c-dfcb-4950-8c2a-3dba0200135f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.000522 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_d346ed1b-38d4-4c87-82f6-78ec3880c670/rabbitmq/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.103204 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xcm92_d95287f3-d510-4991-bde5-94259e7c64d4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.233758 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-n777k_7d9a564a-3bb5-421a-a861-721b16ae1adc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.334385 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cbknc_876f2274-0082-4049-a9a1-e8ed6b517b57/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.467546 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-46ztt_1c13cc81-99d0-465b-a13d-638a7482f669/ssh-known-hosts-edpm-deployment/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.725227 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b6b78dc95-9ft97_6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1/proxy-server/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.881592 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v2fl4_9a040b54-6ee7-446b-83f1-b6b5c211ef43/swift-ring-rebalance/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.926812 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b6b78dc95-9ft97_6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1/proxy-httpd/0.log" Jan 23 18:12:38 crc kubenswrapper[4718]: I0123 18:12:38.989270 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-auditor/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.090283 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-reaper/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.228176 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-server/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.271021 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/account-replicator/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 
18:12:39.316484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-auditor/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.430184 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-replicator/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.487593 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-server/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.514086 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/container-updater/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.571384 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-auditor/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.709085 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-expirer/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.716929 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-replicator/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.754455 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-server/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.807874 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/object-updater/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.914911 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/swift-recon-cron/0.log" Jan 23 18:12:39 crc kubenswrapper[4718]: I0123 18:12:39.948650 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3383bbd9-d755-435c-9d57-c66c5cadaf09/rsync/0.log" Jan 23 18:12:40 crc kubenswrapper[4718]: I0123 18:12:40.105229 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5gq77_708430f1-d1c7-46ef-9e2c-9077a85c95fb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:40 crc kubenswrapper[4718]: I0123 18:12:40.223444 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-hbvkm_7775dfb4-42b6-411d-8dc1-efe8daad5960/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:40 crc kubenswrapper[4718]: I0123 18:12:40.573069 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_68ee45e3-d5d8-4579-85c7-d46064d4ac6c/test-operator-logs-container/0.log" Jan 23 18:12:40 crc kubenswrapper[4718]: I0123 18:12:40.826422 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7jmll_25a0278c-a9ab-4c21-af05-e4fed25e299d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:12:41 crc kubenswrapper[4718]: I0123 18:12:41.302031 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_21700319-2dc7-41c4-8377-8ba6ef629cbb/tempest-tests-tempest-tests-runner/0.log" Jan 23 18:12:42 crc kubenswrapper[4718]: I0123 18:12:42.146834 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:12:42 crc kubenswrapper[4718]: E0123 18:12:42.147416 4718 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:12:52 crc kubenswrapper[4718]: I0123 18:12:52.133484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ffd9d6ca-1e5b-4102-8b5b-664ebd967619/memcached/0.log" Jan 23 18:12:55 crc kubenswrapper[4718]: I0123 18:12:55.141341 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:12:55 crc kubenswrapper[4718]: E0123 18:12:55.142047 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:13:07 crc kubenswrapper[4718]: I0123 18:13:07.141522 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:13:07 crc kubenswrapper[4718]: E0123 18:13:07.142352 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:13:08 crc kubenswrapper[4718]: I0123 18:13:08.244700 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-62wgc_858bcd70-b537-4da9-8ca9-27c1724ece99/manager/1.log" Jan 23 18:13:08 crc kubenswrapper[4718]: I0123 18:13:08.422787 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-62wgc_858bcd70-b537-4da9-8ca9-27c1724ece99/manager/0.log" Jan 23 18:13:08 crc kubenswrapper[4718]: I0123 18:13:08.543147 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-9r22c_a6012879-2e20-485d-829f-3a9ec3e5bcb1/manager/1.log" Jan 23 18:13:08 crc kubenswrapper[4718]: I0123 18:13:08.943725 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-9r22c_a6012879-2e20-485d-829f-3a9ec3e5bcb1/manager/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.128786 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zs7zk_8d9099e2-7f4f-42d8-8e76-d2d8347a1514/manager/1.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.248290 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zs7zk_8d9099e2-7f4f-42d8-8e76-d2d8347a1514/manager/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.286750 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.537815 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: 
I0123 18:13:09.610812 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.646027 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.802285 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/pull/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.817450 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/util/0.log" Jan 23 18:13:09 crc kubenswrapper[4718]: I0123 18:13:09.863042 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ef164afdb418ac9a153fc7dcf55e43a5580d2c285754073e6006c38cc69vg2s_c465f3f4-76f9-4a34-be6d-1bef61e77c8f/extract/0.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.057864 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jjplg_9e8950bc-8213-40eb-9bb7-2e1a8c66b57b/manager/1.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.118147 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jjplg_9e8950bc-8213-40eb-9bb7-2e1a8c66b57b/manager/0.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.161439 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-dfwk2_d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2/manager/1.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.345952 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-dfwk2_d91ca0c9-05fb-4e8b-9581-f2c3d025c0e2/manager/0.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.443260 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sr2hw_d869ec7c-ddd9-4e17-9154-a793539a2a00/manager/1.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.451680 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sr2hw_d869ec7c-ddd9-4e17-9154-a793539a2a00/manager/0.log" Jan 23 18:13:10 crc kubenswrapper[4718]: I0123 18:13:10.900385 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-4tm4n_06df7a47-9233-4957-936e-27f58aeb0000/manager/1.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.077287 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8fsk_16e17ade-97be-48d4-83d4-7ac385174edd/manager/1.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.220783 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8fsk_16e17ade-97be-48d4-83d4-7ac385174edd/manager/0.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.240027 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-4tm4n_06df7a47-9233-4957-936e-27f58aeb0000/manager/0.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.366498 4718 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-nwpcs_50178034-67cf-4f8d-89bb-788c8a73a72a/manager/1.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.573082 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-nwpcs_50178034-67cf-4f8d-89bb-788c8a73a72a/manager/0.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.588586 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-jbxnk_32d58a3a-df31-492e-a2c2-2f5ca31c5f90/manager/1.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.594194 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-jbxnk_32d58a3a-df31-492e-a2c2-2f5ca31c5f90/manager/0.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.788555 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l_9a95eff5-116c-4141-bee6-5bda12f21e11/manager/1.log" Jan 23 18:13:11 crc kubenswrapper[4718]: I0123 18:13:11.839003 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4bx8l_9a95eff5-116c-4141-bee6-5bda12f21e11/manager/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.050612 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sr4hx_8e29e3d6-21d7-4a1a-832e-f831d884fd00/manager/1.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.070725 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sr4hx_8e29e3d6-21d7-4a1a-832e-f831d884fd00/manager/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.153312 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-2m5hx_ae7c1f40-90dd-441b-9dc5-608e1a503f4c/manager/1.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.381003 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-2m5hx_ae7c1f40-90dd-441b-9dc5-608e1a503f4c/manager/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.406126 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kn2t8_2062a379-6201-4835-8974-24befcfbf8e0/manager/1.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.427050 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kn2t8_2062a379-6201-4835-8974-24befcfbf8e0/manager/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.585858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854bhf98_18395392-bb8d-49be-9b49-950d6f32b9f6/manager/1.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.642523 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854bhf98_18395392-bb8d-49be-9b49-950d6f32b9f6/manager/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.846153 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7549f75f-929gl_0c42f381-34a5-4913-90b0-0bbc4e0810fd/operator/1.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 18:13:12.964967 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7549f75f-929gl_0c42f381-34a5-4913-90b0-0bbc4e0810fd/operator/0.log" Jan 23 18:13:12 crc kubenswrapper[4718]: I0123 
18:13:12.991455 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74c6db8f6f-rkhth_369053b2-11b0-4e19-a77d-3ea9cf595039/manager/1.log" Jan 23 18:13:13 crc kubenswrapper[4718]: I0123 18:13:13.337277 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zvx6t_f519ad69-0e68-44c6-9805-40fb66819268/registry-server/0.log" Jan 23 18:13:13 crc kubenswrapper[4718]: I0123 18:13:13.471229 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-znjjw_0fe9ca7e-5763-4cba-afc1-94065f21f33e/manager/1.log" Jan 23 18:13:13 crc kubenswrapper[4718]: I0123 18:13:13.632888 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-9kl82_49cc2143-a384-436e-8eef-4d7474918177/manager/1.log" Jan 23 18:13:13 crc kubenswrapper[4718]: I0123 18:13:13.693233 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-znjjw_0fe9ca7e-5763-4cba-afc1-94065f21f33e/manager/0.log" Jan 23 18:13:13 crc kubenswrapper[4718]: I0123 18:13:13.867567 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-9kl82_49cc2143-a384-436e-8eef-4d7474918177/manager/0.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.009803 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64ktf_235aadec-9416-469c-8455-64dd1bc82a08/operator/1.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.036500 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64ktf_235aadec-9416-469c-8455-64dd1bc82a08/operator/0.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 
18:13:14.485497 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74c6db8f6f-rkhth_369053b2-11b0-4e19-a77d-3ea9cf595039/manager/0.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.529440 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-87q6t_7cd4d741-2a88-466f-a644-a1c6c62e521b/manager/1.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.581040 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-87q6t_7cd4d741-2a88-466f-a644-a1c6c62e521b/manager/0.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.728611 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c7754d696-xthck_f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078/manager/1.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.821607 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9vg4k_3cfce3f5-1f59-43ae-aa99-2483cfb33806/manager/1.log" Jan 23 18:13:14 crc kubenswrapper[4718]: I0123 18:13:14.886418 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9vg4k_3cfce3f5-1f59-43ae-aa99-2483cfb33806/manager/0.log" Jan 23 18:13:15 crc kubenswrapper[4718]: I0123 18:13:15.103555 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c7754d696-xthck_f2fe0ff3-bfa2-4cc4-b85c-8bc89ca73078/manager/0.log" Jan 23 18:13:15 crc kubenswrapper[4718]: I0123 18:13:15.113820 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5lxl4_addb55c8-8565-42c2-84d2-7ee7e8693a3a/manager/1.log" Jan 23 18:13:15 crc 
kubenswrapper[4718]: I0123 18:13:15.137139 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5lxl4_addb55c8-8565-42c2-84d2-7ee7e8693a3a/manager/0.log" Jan 23 18:13:21 crc kubenswrapper[4718]: I0123 18:13:21.140485 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:13:21 crc kubenswrapper[4718]: E0123 18:13:21.141433 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:13:35 crc kubenswrapper[4718]: I0123 18:13:35.140555 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:13:35 crc kubenswrapper[4718]: E0123 18:13:35.141376 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.288796 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:13:36 crc kubenswrapper[4718]: E0123 18:13:36.289783 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48255b18-be57-4299-8651-104f694b9299" containerName="container-00" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 
18:13:36.289799 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="48255b18-be57-4299-8651-104f694b9299" containerName="container-00" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.290133 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="48255b18-be57-4299-8651-104f694b9299" containerName="container-00" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.292345 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.300366 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.340704 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-69wt2_7a72377e-a621-4ebb-b31a-7f405b218eb6/control-plane-machine-set-operator/0.log" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.379305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.379584 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmnb\" (UniqueName: \"kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.380149 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.482727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdmnb\" (UniqueName: \"kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.482911 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.483028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.483487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.483534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.514667 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdmnb\" (UniqueName: \"kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb\") pod \"redhat-operators-5fnzd\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.639047 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.675039 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5vn_8ed9dcbf-5502-4797-9b65-ff900aa065d8/kube-rbac-proxy/0.log" Jan 23 18:13:36 crc kubenswrapper[4718]: I0123 18:13:36.768104 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5vn_8ed9dcbf-5502-4797-9b65-ff900aa065d8/machine-api-operator/0.log" Jan 23 18:13:37 crc kubenswrapper[4718]: I0123 18:13:37.301572 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:13:37 crc kubenswrapper[4718]: I0123 18:13:37.482779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerStarted","Data":"0620192a8e9e103f243c27ae41bb42659980670eb68d821698e0d4232a13b7c7"} Jan 23 18:13:38 crc kubenswrapper[4718]: I0123 18:13:38.495453 4718 generic.go:334] "Generic (PLEG): container finished" podID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" 
containerID="5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de" exitCode=0 Jan 23 18:13:38 crc kubenswrapper[4718]: I0123 18:13:38.495577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerDied","Data":"5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de"} Jan 23 18:13:38 crc kubenswrapper[4718]: I0123 18:13:38.498913 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:13:40 crc kubenswrapper[4718]: I0123 18:13:40.524055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerStarted","Data":"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac"} Jan 23 18:13:42 crc kubenswrapper[4718]: I0123 18:13:42.566931 4718 generic.go:334] "Generic (PLEG): container finished" podID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerID="f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac" exitCode=0 Jan 23 18:13:42 crc kubenswrapper[4718]: I0123 18:13:42.567022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerDied","Data":"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac"} Jan 23 18:13:43 crc kubenswrapper[4718]: I0123 18:13:43.580982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerStarted","Data":"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507"} Jan 23 18:13:43 crc kubenswrapper[4718]: I0123 18:13:43.605731 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5fnzd" 
podStartSLOduration=2.887842856 podStartE2EDuration="7.605711471s" podCreationTimestamp="2026-01-23 18:13:36 +0000 UTC" firstStartedPulling="2026-01-23 18:13:38.498600333 +0000 UTC m=+7019.645842324" lastFinishedPulling="2026-01-23 18:13:43.216468948 +0000 UTC m=+7024.363710939" observedRunningTime="2026-01-23 18:13:43.601360962 +0000 UTC m=+7024.748602963" watchObservedRunningTime="2026-01-23 18:13:43.605711471 +0000 UTC m=+7024.752953472" Jan 23 18:13:46 crc kubenswrapper[4718]: I0123 18:13:46.640277 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:46 crc kubenswrapper[4718]: I0123 18:13:46.640889 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:13:47 crc kubenswrapper[4718]: I0123 18:13:47.694487 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fnzd" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" probeResult="failure" output=< Jan 23 18:13:47 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 18:13:47 crc kubenswrapper[4718]: > Jan 23 18:13:49 crc kubenswrapper[4718]: I0123 18:13:49.148577 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:13:49 crc kubenswrapper[4718]: E0123 18:13:49.149274 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:13:51 crc kubenswrapper[4718]: I0123 18:13:51.312244 4718 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8jlw4_1e5ee60b-7363-4a74-b69d-1f4f474166e0/cert-manager-controller/0.log" Jan 23 18:13:51 crc kubenswrapper[4718]: I0123 18:13:51.321492 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8jlw4_1e5ee60b-7363-4a74-b69d-1f4f474166e0/cert-manager-controller/1.log" Jan 23 18:13:51 crc kubenswrapper[4718]: I0123 18:13:51.513113 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-sntwx_57379fa4-b935-4095-a6c1-9e83709c5906/cert-manager-cainjector/1.log" Jan 23 18:13:51 crc kubenswrapper[4718]: I0123 18:13:51.567128 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-sntwx_57379fa4-b935-4095-a6c1-9e83709c5906/cert-manager-cainjector/0.log" Jan 23 18:13:51 crc kubenswrapper[4718]: I0123 18:13:51.744015 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-br6fl_f99e5457-16fb-453f-909c-a8364ffc0372/cert-manager-webhook/0.log" Jan 23 18:13:57 crc kubenswrapper[4718]: I0123 18:13:57.698838 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fnzd" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" probeResult="failure" output=< Jan 23 18:13:57 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Jan 23 18:13:57 crc kubenswrapper[4718]: > Jan 23 18:14:02 crc kubenswrapper[4718]: I0123 18:14:02.141075 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:14:02 crc kubenswrapper[4718]: I0123 18:14:02.793755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504"} Jan 23 18:14:05 crc kubenswrapper[4718]: I0123 18:14:05.965240 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2h982_4ff516ae-ef38-4eb8-9721-b5e809fa1a53/nmstate-console-plugin/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.171941 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hbmxh_c66f413f-8a00-4526-b93f-4d739aec140c/nmstate-handler/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.230491 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4r42t_c9ada4d9-34eb-43fb-a0ba-09b879eab797/kube-rbac-proxy/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.294322 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4r42t_c9ada4d9-34eb-43fb-a0ba-09b879eab797/nmstate-metrics/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.398484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2n5g_1ae3f970-005a-47f5-9539-ba299ac76301/nmstate-operator/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.489131 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hqbgx_9d41c1ee-b304-42c0-a2e7-2fe83315a430/nmstate-webhook/0.log" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.697874 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:14:06 crc kubenswrapper[4718]: I0123 18:14:06.751822 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:14:07 crc 
kubenswrapper[4718]: I0123 18:14:07.481096 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:14:07 crc kubenswrapper[4718]: I0123 18:14:07.861011 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5fnzd" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" containerID="cri-o://e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507" gracePeriod=2 Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.494128 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.539585 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities\") pod \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.539929 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content\") pod \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.539998 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdmnb\" (UniqueName: \"kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb\") pod \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\" (UID: \"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2\") " Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.542881 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities" 
(OuterVolumeSpecName: "utilities") pod "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" (UID: "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.551480 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb" (OuterVolumeSpecName: "kube-api-access-vdmnb") pod "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" (UID: "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2"). InnerVolumeSpecName "kube-api-access-vdmnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.643251 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.643301 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdmnb\" (UniqueName: \"kubernetes.io/projected/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-kube-api-access-vdmnb\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.688465 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" (UID: "5416e38f-02ba-4fa3-a8ff-3e85c68e87d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.745592 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.873752 4718 generic.go:334] "Generic (PLEG): container finished" podID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerID="e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507" exitCode=0 Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.873820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerDied","Data":"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507"} Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.873860 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fnzd" event={"ID":"5416e38f-02ba-4fa3-a8ff-3e85c68e87d2","Type":"ContainerDied","Data":"0620192a8e9e103f243c27ae41bb42659980670eb68d821698e0d4232a13b7c7"} Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.873861 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fnzd" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.873905 4718 scope.go:117] "RemoveContainer" containerID="e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.909156 4718 scope.go:117] "RemoveContainer" containerID="f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.926702 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.934487 4718 scope.go:117] "RemoveContainer" containerID="5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.946137 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5fnzd"] Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.988880 4718 scope.go:117] "RemoveContainer" containerID="e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507" Jan 23 18:14:08 crc kubenswrapper[4718]: E0123 18:14:08.989556 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507\": container with ID starting with e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507 not found: ID does not exist" containerID="e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.989913 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507"} err="failed to get container status \"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507\": rpc error: code = NotFound desc = could not find container 
\"e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507\": container with ID starting with e8968a002f1486b2dd6c55efb46be868172ced2741bbc68b58da6420f435f507 not found: ID does not exist" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.989963 4718 scope.go:117] "RemoveContainer" containerID="f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac" Jan 23 18:14:08 crc kubenswrapper[4718]: E0123 18:14:08.991084 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac\": container with ID starting with f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac not found: ID does not exist" containerID="f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.991124 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac"} err="failed to get container status \"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac\": rpc error: code = NotFound desc = could not find container \"f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac\": container with ID starting with f5388ef73cfaad7104f5b3b7be73410e48fc4f59cab9ea4f9a6d2f265fcabbac not found: ID does not exist" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.991161 4718 scope.go:117] "RemoveContainer" containerID="5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de" Jan 23 18:14:08 crc kubenswrapper[4718]: E0123 18:14:08.992592 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de\": container with ID starting with 5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de not found: ID does not exist" 
containerID="5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de" Jan 23 18:14:08 crc kubenswrapper[4718]: I0123 18:14:08.992618 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de"} err="failed to get container status \"5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de\": rpc error: code = NotFound desc = could not find container \"5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de\": container with ID starting with 5cbf5e0679a0b19b4e5f4a430e58e5a0db2b75e3a85995a617a2fbdbb14670de not found: ID does not exist" Jan 23 18:14:09 crc kubenswrapper[4718]: I0123 18:14:09.164557 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" path="/var/lib/kubelet/pods/5416e38f-02ba-4fa3-a8ff-3e85c68e87d2/volumes" Jan 23 18:14:18 crc kubenswrapper[4718]: I0123 18:14:18.636944 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/kube-rbac-proxy/0.log" Jan 23 18:14:18 crc kubenswrapper[4718]: I0123 18:14:18.692787 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/1.log" Jan 23 18:14:18 crc kubenswrapper[4718]: I0123 18:14:18.868664 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/0.log" Jan 23 18:14:30 crc kubenswrapper[4718]: I0123 18:14:30.770368 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zp26h_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52/prometheus-operator/0.log" Jan 23 18:14:30 crc kubenswrapper[4718]: I0123 
18:14:30.944527 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_c539786a-23c4-4f13-a3d7-d2166df63aed/prometheus-operator-admission-webhook/0.log" Jan 23 18:14:31 crc kubenswrapper[4718]: I0123 18:14:31.024728 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_38c76550-362b-4f9e-b1fa-58de8a6356a9/prometheus-operator-admission-webhook/0.log" Jan 23 18:14:31 crc kubenswrapper[4718]: I0123 18:14:31.166823 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5qrhk_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0/operator/0.log" Jan 23 18:14:31 crc kubenswrapper[4718]: I0123 18:14:31.232088 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-js85h_206601f2-166b-4dcf-9f9b-77a64e3f6c5b/observability-ui-dashboards/0.log" Jan 23 18:14:31 crc kubenswrapper[4718]: I0123 18:14:31.348971 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b9lkr_8a751218-1b91-4c7f-be34-ea4036ca440f/perses-operator/0.log" Jan 23 18:14:45 crc kubenswrapper[4718]: I0123 18:14:45.495676 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-6gvnk_cfab1ac5-2db7-41ea-8dba-31bdc0e1b22a/cluster-logging-operator/0.log" Jan 23 18:14:45 crc kubenswrapper[4718]: I0123 18:14:45.657985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-b6hvd_e45ebe67-eb65-4bf4-8d7b-f03a7113f22e/collector/0.log" Jan 23 18:14:45 crc kubenswrapper[4718]: I0123 18:14:45.692564 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_c3a642bb-f3f3-4e14-9442-0aa47e1b7b43/loki-compactor/0.log" Jan 23 18:14:45 crc 
kubenswrapper[4718]: I0123 18:14:45.874929 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-c72c6_46bec7ac-b95d-425d-ab7a-4a669278b158/gateway/0.log" Jan 23 18:14:45 crc kubenswrapper[4718]: I0123 18:14:45.898113 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-tpqfn_c2309723-2af5-455a-8f21-41e08e80d045/loki-distributor/0.log" Jan 23 18:14:45 crc kubenswrapper[4718]: I0123 18:14:45.984228 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-c72c6_46bec7ac-b95d-425d-ab7a-4a669278b158/opa/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.070605 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-th7vf_5aca942e-fa67-4679-a257-6db5cf93a95a/opa/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.083458 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f48ff8847-th7vf_5aca942e-fa67-4679-a257-6db5cf93a95a/gateway/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.206225 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_014d7cb2-435f-4a6f-85af-6bc6553d6704/loki-index-gateway/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.337114 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_cdb096c7-cef2-48a8-9f83-4752311a02be/loki-ingester/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.463078 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-2vjw9_03861098-f572-4ace-ab3b-7fddb749da7d/loki-querier/0.log" Jan 23 18:14:46 crc kubenswrapper[4718]: I0123 18:14:46.522075 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-5cw9h_96cdf9bc-4893-4918-94e9-a23212e8ec5c/loki-query-frontend/0.log" Jan 23 18:14:59 crc kubenswrapper[4718]: I0123 18:14:59.703262 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pd8td_9e7b3c6e-a339-4412-aecf-1091bfc315a5/kube-rbac-proxy/0.log" Jan 23 18:14:59 crc kubenswrapper[4718]: I0123 18:14:59.969312 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pd8td_9e7b3c6e-a339-4412-aecf-1091bfc315a5/controller/0.log" Jan 23 18:14:59 crc kubenswrapper[4718]: I0123 18:14:59.997059 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.108874 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.148465 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.188460 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.190591 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.233138 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z"] Jan 23 18:15:00 crc kubenswrapper[4718]: E0123 18:15:00.233783 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="extract-content" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.233801 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="extract-content" Jan 23 18:15:00 crc kubenswrapper[4718]: E0123 18:15:00.233813 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.233819 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4718]: E0123 18:15:00.233837 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="extract-utilities" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.233845 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="extract-utilities" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.234108 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5416e38f-02ba-4fa3-a8ff-3e85c68e87d2" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.234956 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.239516 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.239537 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.256118 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z"] Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.299770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qf5\" (UniqueName: \"kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.299881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.299965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.407406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.407748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.408010 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qf5\" (UniqueName: \"kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.409400 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.418824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.432002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qf5\" (UniqueName: \"kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5\") pod \"collect-profiles-29486535-7xn8z\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.464367 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.465731 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.481684 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.496606 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.568270 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.656765 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-frr-files/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.678824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-metrics/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.678881 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/cp-reloader/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.738529 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/controller/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.923883 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/kube-rbac-proxy/0.log" Jan 23 18:15:00 crc kubenswrapper[4718]: I0123 18:15:00.949750 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/frr-metrics/0.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.008835 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/kube-rbac-proxy-frr/0.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.119560 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z"] Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.182730 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/reloader/0.log" 
Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.272090 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2kdmh_0a04951a-b116-4c6f-ad48-4742051ef181/frr-k8s-webhook-server/0.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.462233 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" event={"ID":"1b88d431-5a34-441c-a61e-7b79628e9608","Type":"ContainerStarted","Data":"218a1fc53607e30e4b897a956dd6874613f9fe28fd9c4e7a5a3f7bc932d10723"} Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.462276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" event={"ID":"1b88d431-5a34-441c-a61e-7b79628e9608","Type":"ContainerStarted","Data":"53843a3ce73702082ba3cf1de24d2dcb93288516c9c07685e40db59851ac1360"} Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.483797 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" podStartSLOduration=1.483777256 podStartE2EDuration="1.483777256s" podCreationTimestamp="2026-01-23 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:15:01.478478312 +0000 UTC m=+7102.625720293" watchObservedRunningTime="2026-01-23 18:15:01.483777256 +0000 UTC m=+7102.631019247" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.488766 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fcf7954b-5fmcn_7eb6e283-9137-4b68-88b1-9a9dccb9fcd5/manager/1.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.539239 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fcf7954b-5fmcn_7eb6e283-9137-4b68-88b1-9a9dccb9fcd5/manager/0.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.795892 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6865b95b75-sk5rr_b6a8f377-b8e9-4241-a0fa-b40031d27cd7/webhook-server/0.log" Jan 23 18:15:01 crc kubenswrapper[4718]: I0123 18:15:01.922437 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rfc5b_b47f2ba5-694f-4929-9932-a844b35ba149/kube-rbac-proxy/0.log" Jan 23 18:15:02 crc kubenswrapper[4718]: I0123 18:15:02.476014 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b88d431-5a34-441c-a61e-7b79628e9608" containerID="218a1fc53607e30e4b897a956dd6874613f9fe28fd9c4e7a5a3f7bc932d10723" exitCode=0 Jan 23 18:15:02 crc kubenswrapper[4718]: I0123 18:15:02.476070 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" event={"ID":"1b88d431-5a34-441c-a61e-7b79628e9608","Type":"ContainerDied","Data":"218a1fc53607e30e4b897a956dd6874613f9fe28fd9c4e7a5a3f7bc932d10723"} Jan 23 18:15:02 crc kubenswrapper[4718]: I0123 18:15:02.697061 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rfc5b_b47f2ba5-694f-4929-9932-a844b35ba149/speaker/0.log" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.165484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dk28g_e444ff80-712b-463e-9fb2-646835a025f9/frr/0.log" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.858435 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.907045 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7qf5\" (UniqueName: \"kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5\") pod \"1b88d431-5a34-441c-a61e-7b79628e9608\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.907287 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume\") pod \"1b88d431-5a34-441c-a61e-7b79628e9608\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.907573 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume\") pod \"1b88d431-5a34-441c-a61e-7b79628e9608\" (UID: \"1b88d431-5a34-441c-a61e-7b79628e9608\") " Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.907828 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b88d431-5a34-441c-a61e-7b79628e9608" (UID: "1b88d431-5a34-441c-a61e-7b79628e9608"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.908268 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b88d431-5a34-441c-a61e-7b79628e9608-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.913144 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b88d431-5a34-441c-a61e-7b79628e9608" (UID: "1b88d431-5a34-441c-a61e-7b79628e9608"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4718]: I0123 18:15:03.913289 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5" (OuterVolumeSpecName: "kube-api-access-x7qf5") pod "1b88d431-5a34-441c-a61e-7b79628e9608" (UID: "1b88d431-5a34-441c-a61e-7b79628e9608"). InnerVolumeSpecName "kube-api-access-x7qf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.011382 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7qf5\" (UniqueName: \"kubernetes.io/projected/1b88d431-5a34-441c-a61e-7b79628e9608-kube-api-access-x7qf5\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.011421 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b88d431-5a34-441c-a61e-7b79628e9608-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.497914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" event={"ID":"1b88d431-5a34-441c-a61e-7b79628e9608","Type":"ContainerDied","Data":"53843a3ce73702082ba3cf1de24d2dcb93288516c9c07685e40db59851ac1360"} Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.498245 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53843a3ce73702082ba3cf1de24d2dcb93288516c9c07685e40db59851ac1360" Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.498141 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-7xn8z" Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.564236 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9"] Jan 23 18:15:04 crc kubenswrapper[4718]: I0123 18:15:04.576553 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-bzsv9"] Jan 23 18:15:05 crc kubenswrapper[4718]: I0123 18:15:05.156048 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2c9b22-a18d-445d-a59c-1b4daf0f0977" path="/var/lib/kubelet/pods/af2c9b22-a18d-445d-a59c-1b4daf0f0977/volumes" Jan 23 18:15:11 crc kubenswrapper[4718]: I0123 18:15:11.081958 4718 scope.go:117] "RemoveContainer" containerID="bceeb28e6af31116828c0d7d9225ffb209d11f1a61f93ffca4fb197681e0f94b" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.020409 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.256955 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.262235 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.273929 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log" Jan 23 18:15:15 crc 
kubenswrapper[4718]: I0123 18:15:15.458722 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/util/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.524687 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/extract/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.552610 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v8gb2_78a945c6-373e-4c80-acb5-1dd5a14c2be6/pull/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.645392 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.837154 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.947809 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.947792 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:15:15 crc kubenswrapper[4718]: I0123 18:15:15.990796 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/pull/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.125416 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/extract/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.136760 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lc9s_dd644261-7d52-41e2-935a-56fde296d6b3/util/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.234881 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.336391 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.366384 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.404047 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.563553 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/util/0.log" Jan 23 
18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.589419 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/extract/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.591354 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcdv4_76c75fbc-0e43-4fac-81e9-06bf925c0a1e/pull/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.760290 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.941836 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.943777 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:15:16 crc kubenswrapper[4718]: I0123 18:15:16.955045 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.131591 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/extract/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.143788 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/pull/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.164514 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71349ph2_f3a19209-a098-4d8f-8c8c-9b345cb3c185/util/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.306441 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.462107 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.487112 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.494061 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.644816 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/util/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.736055 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/extract/0.log" Jan 23 
18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.904746 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08txtvm_6bc79f32-12bc-4d6d-ad7d-ebe468f6e164/pull/0.log" Jan 23 18:15:17 crc kubenswrapper[4718]: I0123 18:15:17.966909 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.204791 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.215294 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.240447 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.400544 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-utilities/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.411674 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/extract-content/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.626375 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.888173 4718 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:15:18 crc kubenswrapper[4718]: I0123 18:15:18.909733 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.008917 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.194621 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-content/0.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.331655 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/extract-utilities/0.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.493758 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bzfg_ad5b2aea-ec41-49cb-ac4b-0497fed12dab/marketplace-operator/1.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.670882 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bzfg_ad5b2aea-ec41-49cb-ac4b-0497fed12dab/marketplace-operator/0.log" Jan 23 18:15:19 crc kubenswrapper[4718]: I0123 18:15:19.917262 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.109754 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.124727 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.168756 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.239165 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d2rjh_2bec1314-07f0-4c53-bad8-5eeb60b833a3/registry-server/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.345451 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-content/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.429768 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/extract-utilities/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.582392 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k8pd8_fa6a4e4f-b2dc-4c25-8989-7dc2a5860d44/registry-server/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.616532 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.738599 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfk5b_c8488e45-93b0-4584-af48-deee2924279f/registry-server/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.804158 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.833485 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:15:20 crc kubenswrapper[4718]: I0123 18:15:20.855142 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 23 18:15:21 crc kubenswrapper[4718]: I0123 18:15:21.009081 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-utilities/0.log" Jan 23 18:15:21 crc kubenswrapper[4718]: I0123 18:15:21.010673 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/extract-content/0.log" Jan 23 18:15:21 crc kubenswrapper[4718]: I0123 18:15:21.931113 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5vb52_3a97ed77-1e1b-447a-9a88-6d43f803f9d9/registry-server/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.702660 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-kh5d8_38c76550-362b-4f9e-b1fa-58de8a6356a9/prometheus-operator-admission-webhook/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.713797 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zp26h_cbd98c6e-aa7f-4040-86a5-f3ef246c3a52/prometheus-operator/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.741507 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-86ddc4996-hdhf4_c539786a-23c4-4f13-a3d7-d2166df63aed/prometheus-operator-admission-webhook/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.890454 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-js85h_206601f2-166b-4dcf-9f9b-77a64e3f6c5b/observability-ui-dashboards/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.903494 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b9lkr_8a751218-1b91-4c7f-be34-ea4036ca440f/perses-operator/0.log" Jan 23 18:15:32 crc kubenswrapper[4718]: I0123 18:15:32.917857 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5qrhk_d8c12e8f-cd36-44e0-9a71-d3b3e5bff8d0/operator/0.log" Jan 23 18:15:45 crc kubenswrapper[4718]: I0123 18:15:45.029616 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/1.log" Jan 23 18:15:45 crc kubenswrapper[4718]: I0123 18:15:45.058097 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/kube-rbac-proxy/0.log" Jan 23 18:15:45 crc kubenswrapper[4718]: I0123 18:15:45.140075 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54c9dfbc84-hsbh6_1615a4d2-6fc4-47ce-8f53-8dd0acc7eba3/manager/0.log" Jan 23 18:16:11 crc kubenswrapper[4718]: I0123 
18:16:11.955873 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:11 crc kubenswrapper[4718]: E0123 18:16:11.957063 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b88d431-5a34-441c-a61e-7b79628e9608" containerName="collect-profiles" Jan 23 18:16:11 crc kubenswrapper[4718]: I0123 18:16:11.957081 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b88d431-5a34-441c-a61e-7b79628e9608" containerName="collect-profiles" Jan 23 18:16:11 crc kubenswrapper[4718]: I0123 18:16:11.957375 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b88d431-5a34-441c-a61e-7b79628e9608" containerName="collect-profiles" Jan 23 18:16:11 crc kubenswrapper[4718]: I0123 18:16:11.961509 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:11 crc kubenswrapper[4718]: I0123 18:16:11.984848 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.079775 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjmk\" (UniqueName: \"kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.080005 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.080436 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.182783 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjmk\" (UniqueName: \"kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.182830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.182993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.183542 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.183558 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.203998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjmk\" (UniqueName: \"kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk\") pod \"community-operators-9t529\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.290019 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:12 crc kubenswrapper[4718]: I0123 18:16:12.974046 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:13 crc kubenswrapper[4718]: I0123 18:16:13.264445 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerStarted","Data":"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a"} Jan 23 18:16:13 crc kubenswrapper[4718]: I0123 18:16:13.264788 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerStarted","Data":"573d5a5c703b07eb0824f9872d20b9751080eeb1d40474c271982aa206c510e7"} Jan 23 18:16:14 crc kubenswrapper[4718]: I0123 18:16:14.275755 4718 generic.go:334] "Generic (PLEG): container finished" podID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerID="0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a" exitCode=0 Jan 23 18:16:14 crc 
kubenswrapper[4718]: I0123 18:16:14.275884 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerDied","Data":"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a"} Jan 23 18:16:15 crc kubenswrapper[4718]: I0123 18:16:15.289399 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerStarted","Data":"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85"} Jan 23 18:16:16 crc kubenswrapper[4718]: I0123 18:16:16.306060 4718 generic.go:334] "Generic (PLEG): container finished" podID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerID="d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85" exitCode=0 Jan 23 18:16:16 crc kubenswrapper[4718]: I0123 18:16:16.306105 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerDied","Data":"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85"} Jan 23 18:16:17 crc kubenswrapper[4718]: I0123 18:16:17.320451 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerStarted","Data":"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a"} Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.291789 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.292114 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.341756 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.379055 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t529" podStartSLOduration=8.880419488 podStartE2EDuration="11.379036136s" podCreationTimestamp="2026-01-23 18:16:11 +0000 UTC" firstStartedPulling="2026-01-23 18:16:14.278113044 +0000 UTC m=+7175.425355025" lastFinishedPulling="2026-01-23 18:16:16.776729682 +0000 UTC m=+7177.923971673" observedRunningTime="2026-01-23 18:16:17.344668671 +0000 UTC m=+7178.491910672" watchObservedRunningTime="2026-01-23 18:16:22.379036136 +0000 UTC m=+7183.526278127" Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.465025 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:22 crc kubenswrapper[4718]: I0123 18:16:22.593248 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:24 crc kubenswrapper[4718]: I0123 18:16:24.435582 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t529" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="registry-server" containerID="cri-o://5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a" gracePeriod=2 Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.013684 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.125005 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities\") pod \"4199de1e-7e3f-43f8-8673-17f8df6641ab\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.125692 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content\") pod \"4199de1e-7e3f-43f8-8673-17f8df6641ab\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.125826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmjmk\" (UniqueName: \"kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk\") pod \"4199de1e-7e3f-43f8-8673-17f8df6641ab\" (UID: \"4199de1e-7e3f-43f8-8673-17f8df6641ab\") " Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.125966 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities" (OuterVolumeSpecName: "utilities") pod "4199de1e-7e3f-43f8-8673-17f8df6641ab" (UID: "4199de1e-7e3f-43f8-8673-17f8df6641ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.126671 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.133990 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk" (OuterVolumeSpecName: "kube-api-access-cmjmk") pod "4199de1e-7e3f-43f8-8673-17f8df6641ab" (UID: "4199de1e-7e3f-43f8-8673-17f8df6641ab"). InnerVolumeSpecName "kube-api-access-cmjmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.181772 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4199de1e-7e3f-43f8-8673-17f8df6641ab" (UID: "4199de1e-7e3f-43f8-8673-17f8df6641ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.229260 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmjmk\" (UniqueName: \"kubernetes.io/projected/4199de1e-7e3f-43f8-8673-17f8df6641ab-kube-api-access-cmjmk\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.229302 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4199de1e-7e3f-43f8-8673-17f8df6641ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.448535 4718 generic.go:334] "Generic (PLEG): container finished" podID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerID="5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a" exitCode=0 Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.448602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerDied","Data":"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a"} Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.449737 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t529" event={"ID":"4199de1e-7e3f-43f8-8673-17f8df6641ab","Type":"ContainerDied","Data":"573d5a5c703b07eb0824f9872d20b9751080eeb1d40474c271982aa206c510e7"} Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.449767 4718 scope.go:117] "RemoveContainer" containerID="5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.448652 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9t529" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.475187 4718 scope.go:117] "RemoveContainer" containerID="d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.492222 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.503497 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9t529"] Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.503970 4718 scope.go:117] "RemoveContainer" containerID="0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.567881 4718 scope.go:117] "RemoveContainer" containerID="5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a" Jan 23 18:16:25 crc kubenswrapper[4718]: E0123 18:16:25.571358 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a\": container with ID starting with 5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a not found: ID does not exist" containerID="5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.571408 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a"} err="failed to get container status \"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a\": rpc error: code = NotFound desc = could not find container \"5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a\": container with ID starting with 5162bdfcf58289b8acc87199dd1f2372a3b2e1ea8f36d9c49de98a16a155297a not 
found: ID does not exist" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.571447 4718 scope.go:117] "RemoveContainer" containerID="d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85" Jan 23 18:16:25 crc kubenswrapper[4718]: E0123 18:16:25.572020 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85\": container with ID starting with d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85 not found: ID does not exist" containerID="d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.572056 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85"} err="failed to get container status \"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85\": rpc error: code = NotFound desc = could not find container \"d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85\": container with ID starting with d0029bd14c06858612071ec12feae1b72db830ec136c5ae5b7b0b9d7fe991f85 not found: ID does not exist" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.572279 4718 scope.go:117] "RemoveContainer" containerID="0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a" Jan 23 18:16:25 crc kubenswrapper[4718]: E0123 18:16:25.572765 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a\": container with ID starting with 0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a not found: ID does not exist" containerID="0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a" Jan 23 18:16:25 crc kubenswrapper[4718]: I0123 18:16:25.572786 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a"} err="failed to get container status \"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a\": rpc error: code = NotFound desc = could not find container \"0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a\": container with ID starting with 0bca76c61822d4aa845f8eadeced6bd25ff60c75c9288f3a2901f29937462b1a not found: ID does not exist" Jan 23 18:16:27 crc kubenswrapper[4718]: I0123 18:16:27.154990 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" path="/var/lib/kubelet/pods/4199de1e-7e3f-43f8-8673-17f8df6641ab/volumes" Jan 23 18:16:28 crc kubenswrapper[4718]: I0123 18:16:28.875470 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:16:28 crc kubenswrapper[4718]: I0123 18:16:28.875862 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:16:58 crc kubenswrapper[4718]: I0123 18:16:58.875500 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:16:58 crc kubenswrapper[4718]: I0123 18:16:58.877098 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:11 crc kubenswrapper[4718]: I0123 18:17:11.203917 4718 scope.go:117] "RemoveContainer" containerID="8cc9eb2cc70617a7addeab8099a5a51167ac76a25b9a4c82138de030c597e86e" Jan 23 18:17:28 crc kubenswrapper[4718]: I0123 18:17:28.875437 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:28 crc kubenswrapper[4718]: I0123 18:17:28.875933 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:28 crc kubenswrapper[4718]: I0123 18:17:28.875973 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 18:17:28 crc kubenswrapper[4718]: I0123 18:17:28.877290 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:17:28 crc kubenswrapper[4718]: I0123 18:17:28.877348 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504" gracePeriod=600 Jan 23 18:17:29 crc kubenswrapper[4718]: I0123 18:17:29.235085 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504" exitCode=0 Jan 23 18:17:29 crc kubenswrapper[4718]: I0123 18:17:29.235133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504"} Jan 23 18:17:29 crc kubenswrapper[4718]: I0123 18:17:29.235173 4718 scope.go:117] "RemoveContainer" containerID="44eb1561e99e7e582638ee6a9ee76f239a7c875120477f3f1cd3f093f9c84ecc" Jan 23 18:17:30 crc kubenswrapper[4718]: I0123 18:17:30.249576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerStarted","Data":"bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054"} Jan 23 18:17:57 crc kubenswrapper[4718]: I0123 18:17:57.568219 4718 generic.go:334] "Generic (PLEG): container finished" podID="83eabaad-0a68-4c52-8829-89142370eab1" containerID="b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68" exitCode=0 Jan 23 18:17:57 crc kubenswrapper[4718]: I0123 18:17:57.568300 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" event={"ID":"83eabaad-0a68-4c52-8829-89142370eab1","Type":"ContainerDied","Data":"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68"} Jan 23 18:17:57 crc kubenswrapper[4718]: I0123 
18:17:57.570410 4718 scope.go:117] "RemoveContainer" containerID="b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68" Jan 23 18:17:57 crc kubenswrapper[4718]: I0123 18:17:57.715449 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mk2dv_must-gather-k4zqv_83eabaad-0a68-4c52-8829-89142370eab1/gather/0.log" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.002759 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mk2dv/must-gather-k4zqv"] Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.004284 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="copy" containerID="cri-o://a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4" gracePeriod=2 Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.024404 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mk2dv/must-gather-k4zqv"] Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.533264 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mk2dv_must-gather-k4zqv_83eabaad-0a68-4c52-8829-89142370eab1/copy/0.log" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.534298 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.626717 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output\") pod \"83eabaad-0a68-4c52-8829-89142370eab1\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.627109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5j48\" (UniqueName: \"kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48\") pod \"83eabaad-0a68-4c52-8829-89142370eab1\" (UID: \"83eabaad-0a68-4c52-8829-89142370eab1\") " Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.637110 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48" (OuterVolumeSpecName: "kube-api-access-t5j48") pod "83eabaad-0a68-4c52-8829-89142370eab1" (UID: "83eabaad-0a68-4c52-8829-89142370eab1"). InnerVolumeSpecName "kube-api-access-t5j48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.731546 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5j48\" (UniqueName: \"kubernetes.io/projected/83eabaad-0a68-4c52-8829-89142370eab1-kube-api-access-t5j48\") on node \"crc\" DevicePath \"\"" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.745946 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mk2dv_must-gather-k4zqv_83eabaad-0a68-4c52-8829-89142370eab1/copy/0.log" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.746625 4718 generic.go:334] "Generic (PLEG): container finished" podID="83eabaad-0a68-4c52-8829-89142370eab1" containerID="a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4" exitCode=143 Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.746784 4718 scope.go:117] "RemoveContainer" containerID="a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.746827 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mk2dv/must-gather-k4zqv" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.775115 4718 scope.go:117] "RemoveContainer" containerID="b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.828167 4718 scope.go:117] "RemoveContainer" containerID="a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4" Jan 23 18:18:10 crc kubenswrapper[4718]: E0123 18:18:10.828952 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4\": container with ID starting with a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4 not found: ID does not exist" containerID="a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.829029 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4"} err="failed to get container status \"a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4\": rpc error: code = NotFound desc = could not find container \"a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4\": container with ID starting with a5248148424ff6320c95cf92b69e62c2b36f33eff824350205a651a68b80a3f4 not found: ID does not exist" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.829084 4718 scope.go:117] "RemoveContainer" containerID="b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68" Jan 23 18:18:10 crc kubenswrapper[4718]: E0123 18:18:10.829811 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68\": container with ID starting with 
b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68 not found: ID does not exist" containerID="b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.829890 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68"} err="failed to get container status \"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68\": rpc error: code = NotFound desc = could not find container \"b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68\": container with ID starting with b6136415d56e87f43fc64b2ce89ef7ac0d18ff58bddac5651de40c8b74347e68 not found: ID does not exist" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.842062 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "83eabaad-0a68-4c52-8829-89142370eab1" (UID: "83eabaad-0a68-4c52-8829-89142370eab1"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:18:10 crc kubenswrapper[4718]: I0123 18:18:10.936896 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/83eabaad-0a68-4c52-8829-89142370eab1-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 18:18:11 crc kubenswrapper[4718]: I0123 18:18:11.171740 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83eabaad-0a68-4c52-8829-89142370eab1" path="/var/lib/kubelet/pods/83eabaad-0a68-4c52-8829-89142370eab1/volumes" Jan 23 18:18:11 crc kubenswrapper[4718]: I0123 18:18:11.291437 4718 scope.go:117] "RemoveContainer" containerID="b19b32d416ae11e68861eed0246c57e6bd7a9141d7843336e441522836cb8a3f" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.428414 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:35 crc kubenswrapper[4718]: E0123 18:19:35.439583 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="extract-content" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.439660 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="extract-content" Jan 23 18:19:35 crc kubenswrapper[4718]: E0123 18:19:35.439691 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="extract-utilities" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.439703 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="extract-utilities" Jan 23 18:19:35 crc kubenswrapper[4718]: E0123 18:19:35.439743 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="copy" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.439751 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="copy" Jan 23 18:19:35 crc kubenswrapper[4718]: E0123 18:19:35.439775 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="gather" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.439783 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="gather" Jan 23 18:19:35 crc kubenswrapper[4718]: E0123 18:19:35.439800 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="registry-server" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.439808 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="registry-server" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.440141 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4199de1e-7e3f-43f8-8673-17f8df6641ab" containerName="registry-server" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.440174 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="copy" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.440227 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eabaad-0a68-4c52-8829-89142370eab1" containerName="gather" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.442368 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.450350 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.527761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcntg\" (UniqueName: \"kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.527998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.528305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.630960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.631188 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zcntg\" (UniqueName: \"kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.631240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.631989 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.632005 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.668479 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcntg\" (UniqueName: \"kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg\") pod \"redhat-marketplace-npxb5\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:35 crc kubenswrapper[4718]: I0123 18:19:35.783244 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:36 crc kubenswrapper[4718]: I0123 18:19:36.280365 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:37 crc kubenswrapper[4718]: I0123 18:19:37.053102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerStarted","Data":"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78"} Jan 23 18:19:37 crc kubenswrapper[4718]: I0123 18:19:37.053456 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerStarted","Data":"2d7e5149d4220ed979325e88563ea3f2a902a1a5c5ccba00e1a78700586f276a"} Jan 23 18:19:38 crc kubenswrapper[4718]: I0123 18:19:38.065096 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e53cc84-d810-4f19-96fd-519664aef153" containerID="566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78" exitCode=0 Jan 23 18:19:38 crc kubenswrapper[4718]: I0123 18:19:38.065131 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerDied","Data":"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78"} Jan 23 18:19:38 crc kubenswrapper[4718]: I0123 18:19:38.067372 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:19:40 crc kubenswrapper[4718]: I0123 18:19:40.104087 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e53cc84-d810-4f19-96fd-519664aef153" containerID="c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298" exitCode=0 Jan 23 18:19:40 crc kubenswrapper[4718]: I0123 18:19:40.104166 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerDied","Data":"c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298"} Jan 23 18:19:41 crc kubenswrapper[4718]: I0123 18:19:41.117778 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerStarted","Data":"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489"} Jan 23 18:19:41 crc kubenswrapper[4718]: I0123 18:19:41.144832 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npxb5" podStartSLOduration=3.723585903 podStartE2EDuration="6.144811673s" podCreationTimestamp="2026-01-23 18:19:35 +0000 UTC" firstStartedPulling="2026-01-23 18:19:38.0671523 +0000 UTC m=+7379.214394291" lastFinishedPulling="2026-01-23 18:19:40.48837807 +0000 UTC m=+7381.635620061" observedRunningTime="2026-01-23 18:19:41.13779267 +0000 UTC m=+7382.285034691" watchObservedRunningTime="2026-01-23 18:19:41.144811673 +0000 UTC m=+7382.292053674" Jan 23 18:19:45 crc kubenswrapper[4718]: I0123 18:19:45.783404 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:45 crc kubenswrapper[4718]: I0123 18:19:45.784044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:45 crc kubenswrapper[4718]: I0123 18:19:45.840915 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:46 crc kubenswrapper[4718]: I0123 18:19:46.215892 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:46 crc kubenswrapper[4718]: I0123 18:19:46.262281 4718 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.200314 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npxb5" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="registry-server" containerID="cri-o://43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489" gracePeriod=2 Jan 23 18:19:48 crc kubenswrapper[4718]: E0123 18:19:48.247576 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e53cc84_d810_4f19_96fd_519664aef153.slice/crio-43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:19:48 crc kubenswrapper[4718]: E0123 18:19:48.248188 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e53cc84_d810_4f19_96fd_519664aef153.slice/crio-43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.703227 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.865614 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content\") pod \"8e53cc84-d810-4f19-96fd-519664aef153\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.865754 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcntg\" (UniqueName: \"kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg\") pod \"8e53cc84-d810-4f19-96fd-519664aef153\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.865940 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities\") pod \"8e53cc84-d810-4f19-96fd-519664aef153\" (UID: \"8e53cc84-d810-4f19-96fd-519664aef153\") " Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.867192 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities" (OuterVolumeSpecName: "utilities") pod "8e53cc84-d810-4f19-96fd-519664aef153" (UID: "8e53cc84-d810-4f19-96fd-519664aef153"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.873741 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg" (OuterVolumeSpecName: "kube-api-access-zcntg") pod "8e53cc84-d810-4f19-96fd-519664aef153" (UID: "8e53cc84-d810-4f19-96fd-519664aef153"). InnerVolumeSpecName "kube-api-access-zcntg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.915356 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e53cc84-d810-4f19-96fd-519664aef153" (UID: "8e53cc84-d810-4f19-96fd-519664aef153"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.969051 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.969102 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcntg\" (UniqueName: \"kubernetes.io/projected/8e53cc84-d810-4f19-96fd-519664aef153-kube-api-access-zcntg\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:48 crc kubenswrapper[4718]: I0123 18:19:48.969116 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e53cc84-d810-4f19-96fd-519664aef153-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.216290 4718 generic.go:334] "Generic (PLEG): container finished" podID="8e53cc84-d810-4f19-96fd-519664aef153" containerID="43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489" exitCode=0 Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.216356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerDied","Data":"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489"} Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.216413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-npxb5" event={"ID":"8e53cc84-d810-4f19-96fd-519664aef153","Type":"ContainerDied","Data":"2d7e5149d4220ed979325e88563ea3f2a902a1a5c5ccba00e1a78700586f276a"} Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.216435 4718 scope.go:117] "RemoveContainer" containerID="43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.216377 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npxb5" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.256755 4718 scope.go:117] "RemoveContainer" containerID="c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.267765 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.290413 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npxb5"] Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.292942 4718 scope.go:117] "RemoveContainer" containerID="566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.348579 4718 scope.go:117] "RemoveContainer" containerID="43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489" Jan 23 18:19:49 crc kubenswrapper[4718]: E0123 18:19:49.351417 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489\": container with ID starting with 43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489 not found: ID does not exist" containerID="43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.351469 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489"} err="failed to get container status \"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489\": rpc error: code = NotFound desc = could not find container \"43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489\": container with ID starting with 43775bbf6d54ca88e174981f4715c25bbf2102bf5f6efed368c7f199406bb489 not found: ID does not exist" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.351500 4718 scope.go:117] "RemoveContainer" containerID="c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298" Jan 23 18:19:49 crc kubenswrapper[4718]: E0123 18:19:49.352128 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298\": container with ID starting with c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298 not found: ID does not exist" containerID="c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.352317 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298"} err="failed to get container status \"c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298\": rpc error: code = NotFound desc = could not find container \"c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298\": container with ID starting with c40217cec012ef4520fed5c80f5b6fedfda4d3cdce55ecff1d43b28999e22298 not found: ID does not exist" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.352426 4718 scope.go:117] "RemoveContainer" containerID="566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78" Jan 23 18:19:49 crc kubenswrapper[4718]: E0123 
18:19:49.352881 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78\": container with ID starting with 566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78 not found: ID does not exist" containerID="566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78" Jan 23 18:19:49 crc kubenswrapper[4718]: I0123 18:19:49.352922 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78"} err="failed to get container status \"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78\": rpc error: code = NotFound desc = could not find container \"566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78\": container with ID starting with 566675a31a2a761e5cbcf1285718f1d31a5db84f3af13bb069a7e19d78fedd78 not found: ID does not exist" Jan 23 18:19:51 crc kubenswrapper[4718]: I0123 18:19:51.156191 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e53cc84-d810-4f19-96fd-519664aef153" path="/var/lib/kubelet/pods/8e53cc84-d810-4f19-96fd-519664aef153/volumes" Jan 23 18:19:55 crc kubenswrapper[4718]: I0123 18:19:55.788549 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5b6b78dc95-9ft97" podUID="6bfb9160-e4fe-4bbc-a3f6-8052a3d5ced1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 18:19:58 crc kubenswrapper[4718]: I0123 18:19:58.875505 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:19:58 crc kubenswrapper[4718]: I0123 18:19:58.876045 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.556278 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:19:59 crc kubenswrapper[4718]: E0123 18:19:59.557360 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="registry-server" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.557388 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="registry-server" Jan 23 18:19:59 crc kubenswrapper[4718]: E0123 18:19:59.557424 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="extract-utilities" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.557433 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="extract-utilities" Jan 23 18:19:59 crc kubenswrapper[4718]: E0123 18:19:59.557461 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="extract-content" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.557495 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="extract-content" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.557840 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53cc84-d810-4f19-96fd-519664aef153" containerName="registry-server" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.562715 4718 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.576973 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.651495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.651574 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.651766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2wk\" (UniqueName: \"kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.753458 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.753506 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.753644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db2wk\" (UniqueName: \"kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.754345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.754368 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.773387 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2wk\" (UniqueName: \"kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk\") pod \"certified-operators-f4pm9\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:19:59 crc kubenswrapper[4718]: I0123 18:19:59.896253 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:00 crc kubenswrapper[4718]: I0123 18:20:00.378458 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:20:01 crc kubenswrapper[4718]: I0123 18:20:01.357066 4718 generic.go:334] "Generic (PLEG): container finished" podID="cdaf0e86-fd3f-4d86-9de3-298d3aae43df" containerID="8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256" exitCode=0 Jan 23 18:20:01 crc kubenswrapper[4718]: I0123 18:20:01.357182 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerDied","Data":"8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256"} Jan 23 18:20:01 crc kubenswrapper[4718]: I0123 18:20:01.357400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerStarted","Data":"c8c629c907e25e147453c1b7f4ba9e98cefe7650937d08e096b999dbff0cd689"} Jan 23 18:20:02 crc kubenswrapper[4718]: I0123 18:20:02.372354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerStarted","Data":"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe"} Jan 23 18:20:03 crc kubenswrapper[4718]: I0123 18:20:03.386884 4718 generic.go:334] "Generic (PLEG): container finished" podID="cdaf0e86-fd3f-4d86-9de3-298d3aae43df" containerID="880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe" exitCode=0 Jan 23 18:20:03 crc kubenswrapper[4718]: I0123 18:20:03.387007 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" 
event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerDied","Data":"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe"} Jan 23 18:20:04 crc kubenswrapper[4718]: I0123 18:20:04.402275 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerStarted","Data":"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9"} Jan 23 18:20:05 crc kubenswrapper[4718]: I0123 18:20:05.435911 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4pm9" podStartSLOduration=3.7152895519999998 podStartE2EDuration="6.435887592s" podCreationTimestamp="2026-01-23 18:19:59 +0000 UTC" firstStartedPulling="2026-01-23 18:20:01.359676675 +0000 UTC m=+7402.506918666" lastFinishedPulling="2026-01-23 18:20:04.080274715 +0000 UTC m=+7405.227516706" observedRunningTime="2026-01-23 18:20:05.429405445 +0000 UTC m=+7406.576647436" watchObservedRunningTime="2026-01-23 18:20:05.435887592 +0000 UTC m=+7406.583129583" Jan 23 18:20:09 crc kubenswrapper[4718]: I0123 18:20:09.896782 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:09 crc kubenswrapper[4718]: I0123 18:20:09.897204 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:09 crc kubenswrapper[4718]: I0123 18:20:09.946672 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:10 crc kubenswrapper[4718]: I0123 18:20:10.534798 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:10 crc kubenswrapper[4718]: I0123 18:20:10.620547 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:20:12 crc kubenswrapper[4718]: I0123 18:20:12.500799 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f4pm9" podUID="cdaf0e86-fd3f-4d86-9de3-298d3aae43df" containerName="registry-server" containerID="cri-o://2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9" gracePeriod=2 Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.025720 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.204960 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db2wk\" (UniqueName: \"kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk\") pod \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.205387 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content\") pod \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.205419 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities\") pod \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\" (UID: \"cdaf0e86-fd3f-4d86-9de3-298d3aae43df\") " Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.206560 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities" (OuterVolumeSpecName: "utilities") pod "cdaf0e86-fd3f-4d86-9de3-298d3aae43df" (UID: 
"cdaf0e86-fd3f-4d86-9de3-298d3aae43df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.214009 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk" (OuterVolumeSpecName: "kube-api-access-db2wk") pod "cdaf0e86-fd3f-4d86-9de3-298d3aae43df" (UID: "cdaf0e86-fd3f-4d86-9de3-298d3aae43df"). InnerVolumeSpecName "kube-api-access-db2wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.252376 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdaf0e86-fd3f-4d86-9de3-298d3aae43df" (UID: "cdaf0e86-fd3f-4d86-9de3-298d3aae43df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.313860 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db2wk\" (UniqueName: \"kubernetes.io/projected/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-kube-api-access-db2wk\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.313918 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.313932 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaf0e86-fd3f-4d86-9de3-298d3aae43df-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.512782 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="cdaf0e86-fd3f-4d86-9de3-298d3aae43df" containerID="2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9" exitCode=0 Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.512828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerDied","Data":"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9"} Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.512852 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4pm9" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.512872 4718 scope.go:117] "RemoveContainer" containerID="2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.512862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4pm9" event={"ID":"cdaf0e86-fd3f-4d86-9de3-298d3aae43df","Type":"ContainerDied","Data":"c8c629c907e25e147453c1b7f4ba9e98cefe7650937d08e096b999dbff0cd689"} Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.538918 4718 scope.go:117] "RemoveContainer" containerID="880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.548262 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.558466 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4pm9"] Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.578048 4718 scope.go:117] "RemoveContainer" containerID="8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.737109 4718 scope.go:117] "RemoveContainer" 
containerID="2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9" Jan 23 18:20:13 crc kubenswrapper[4718]: E0123 18:20:13.737739 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9\": container with ID starting with 2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9 not found: ID does not exist" containerID="2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.737815 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9"} err="failed to get container status \"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9\": rpc error: code = NotFound desc = could not find container \"2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9\": container with ID starting with 2223926cad19d1ded759bae62c09d4dc5a00b6bf828255a097e71cc8e1d585d9 not found: ID does not exist" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.737850 4718 scope.go:117] "RemoveContainer" containerID="880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe" Jan 23 18:20:13 crc kubenswrapper[4718]: E0123 18:20:13.738534 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe\": container with ID starting with 880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe not found: ID does not exist" containerID="880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.738573 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe"} err="failed to get container status \"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe\": rpc error: code = NotFound desc = could not find container \"880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe\": container with ID starting with 880935b0047ab1510da96535256571ed5c62bec19476a95a15e77d5b33744dfe not found: ID does not exist" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.738595 4718 scope.go:117] "RemoveContainer" containerID="8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256" Jan 23 18:20:13 crc kubenswrapper[4718]: E0123 18:20:13.739024 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256\": container with ID starting with 8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256 not found: ID does not exist" containerID="8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256" Jan 23 18:20:13 crc kubenswrapper[4718]: I0123 18:20:13.739065 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256"} err="failed to get container status \"8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256\": rpc error: code = NotFound desc = could not find container \"8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256\": container with ID starting with 8c9785206f8f1de07b535b22d119480840d31f0033237ec07914babbadd4d256 not found: ID does not exist" Jan 23 18:20:15 crc kubenswrapper[4718]: I0123 18:20:15.153824 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaf0e86-fd3f-4d86-9de3-298d3aae43df" path="/var/lib/kubelet/pods/cdaf0e86-fd3f-4d86-9de3-298d3aae43df/volumes" Jan 23 18:20:28 crc kubenswrapper[4718]: I0123 
18:20:28.875476 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:28 crc kubenswrapper[4718]: I0123 18:20:28.876028 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:58 crc kubenswrapper[4718]: I0123 18:20:58.876068 4718 patch_prober.go:28] interesting pod/machine-config-daemon-sf9rn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:58 crc kubenswrapper[4718]: I0123 18:20:58.876744 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:58 crc kubenswrapper[4718]: I0123 18:20:58.876815 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" Jan 23 18:20:58 crc kubenswrapper[4718]: I0123 18:20:58.877972 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054"} pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:20:58 crc kubenswrapper[4718]: I0123 18:20:58.878035 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerName="machine-config-daemon" containerID="cri-o://bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" gracePeriod=600 Jan 23 18:20:58 crc kubenswrapper[4718]: E0123 18:20:58.997975 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:20:59 crc kubenswrapper[4718]: I0123 18:20:59.016934 4718 generic.go:334] "Generic (PLEG): container finished" podID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" exitCode=0 Jan 23 18:20:59 crc kubenswrapper[4718]: I0123 18:20:59.016981 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" event={"ID":"48ad62dc-feb1-4fb1-989b-7830ef9061c2","Type":"ContainerDied","Data":"bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054"} Jan 23 18:20:59 crc kubenswrapper[4718]: I0123 18:20:59.017017 4718 scope.go:117] "RemoveContainer" containerID="07e3bff21e1021455adee293793c290c5c0fec29dfe4e918d59c2e097fdf1504" Jan 23 18:20:59 crc kubenswrapper[4718]: I0123 18:20:59.017862 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:20:59 crc kubenswrapper[4718]: E0123 18:20:59.018224 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:21:14 crc kubenswrapper[4718]: I0123 18:21:14.140709 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:21:14 crc kubenswrapper[4718]: E0123 18:21:14.141663 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:21:25 crc kubenswrapper[4718]: I0123 18:21:25.140603 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:21:25 crc kubenswrapper[4718]: E0123 18:21:25.141459 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:21:40 crc kubenswrapper[4718]: I0123 18:21:40.141584 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:21:40 crc kubenswrapper[4718]: E0123 
18:21:40.142266 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:21:53 crc kubenswrapper[4718]: I0123 18:21:53.141330 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:21:53 crc kubenswrapper[4718]: E0123 18:21:53.142613 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2" Jan 23 18:22:06 crc kubenswrapper[4718]: I0123 18:22:06.141071 4718 scope.go:117] "RemoveContainer" containerID="bf406166e989fedde3aa598e1ce3e433f7c8565425f929fc0c524666c3879054" Jan 23 18:22:06 crc kubenswrapper[4718]: E0123 18:22:06.142658 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sf9rn_openshift-machine-config-operator(48ad62dc-feb1-4fb1-989b-7830ef9061c2)\"" pod="openshift-machine-config-operator/machine-config-daemon-sf9rn" podUID="48ad62dc-feb1-4fb1-989b-7830ef9061c2"